
Kubernetes: How Did We Get Here?


This is the first of two blogs examining the history and future of Kubernetes. For more information, check out our e-book Kubernetes: The Future of Infrastructure.

 

Kubernetes is a hot topic right now, and with good reason – it represents a large component of the future of software development. If you’re reading this, you may already be familiar with some of the benefits promised by Kubernetes, such as simplified management, enhanced scalability, and increased visibility, among others. However, it’s hard to appreciate the gravity of these advancements without the right context. To properly frame the benefits of Kubernetes, we’ll ask the two questions inherent in every technological advancement – “How did we get here?” and “Where are we going?” In this post, we’ll focus on the first of these by examining the IT developments that led us to where we are today and laid the groundwork for Kubernetes.

Better than Baremetal

Once upon a time, if you wanted to grow your infrastructure to support software development, you had to purchase additional servers and physically scale your environment to meet your application needs. This was less than ideal. It was resource- and time-intensive to build out, and you then needed to maintain it for performance and availability, which layered on even more time and expense. Fortunately, a company called VMware introduced the world to virtualization (virtual machines), bringing with it increased flexibility, scalability, reliability, and overall performance while lowering expenses. This advancement led to an explosion of innovation in software development. With infrastructure presenting less of a bottleneck, developing software became cheaper and faster. However, just as with bare metal in the past, the demands of software development would begin to outstrip what virtual machines (VMs) could offer.

Evolving App Development

There are many parallels between the rise of virtual machines and the rise of containers. At their simplest, containers build upon VMs in the same way VMs built on bare metal – that is, they allow us to get the most out of our infrastructure. Containers, however, accomplish this at a much larger scale. With containers, we can run more processes on each virtual machine to increase its efficiency. We can also run any type of workload within a container because it is isolated, ensuring that each workload is protected from the others. This resource efficiency comes with obvious benefits. It brings a more efficient approach to software development, increasing engineering agility by reducing wasted resources and empowering teams to build and share code more rapidly in the form of microservices. On top of this, containerization improves scalability through a more lightweight and resource-efficient approach.

Ease of use is another important benefit of containers. Similar to how virtual machines were easier to create, scale, and manage compared to physical hardware, containers make it even easier to build software because they can start up in a few seconds. Containers enable us to run a lightweight, isolated process on top of our existing virtual machine, letting us quickly and easily scale without getting bogged down with DevOps busy work.  Similar to VM orchestration tools, container orchestration presents us with an opportunity to further enable and enhance the benefits of containers.

The Case for Container Orchestration

Container adoption has necessitated container orchestration in the same way that the adoption of virtualization forced companies to use tools to launch, monitor, create, and destroy their VMs. Like VMs, containers must be monitored and orchestrated to ensure they are working properly; doing this manually would risk losing many of the primary benefits containers offer. For instance, if we wanted to run multiple containers across multiple servers and virtual machines — necessary for using microservices — handling all of the moving parts would require a huge DevOps effort. These many moving pieces require us to answer several questions, such as when to start the right containers, how to ensure the containers can talk to each other, what the storage considerations are, and how to ensure high availability across our infrastructure. Fortunately, tools like Kubernetes accomplish just that, allowing developers to better track, schedule, and operationalize containers at scale. This allows us to realize more of the value of containers and microservices, and helps open the door to transforming the way we develop, maintain, and improve software.

So, you want to know how Kubernetes works, how it will change infrastructure, and how it can help you? Check back with us next week where we’ll dive into “Where we’re going” with Kubernetes.

The post Kubernetes: How Did We Get Here? appeared first on KongHQ.


Kong CE 0.14 Feature Review – Nginx Injected Directives


 

As part of our series helping you get up to speed on the new features released in Kong CE 0.14, we want to dive into one of our most exciting and long-awaited features – Dynamic Injection for Nginx Directives. This new feature enables Kong users to easily exercise greater control over their Nginx configurations and eliminates the tedious work of maintaining custom configurations across new Kong releases.

As you are likely aware, Kong ships with an Nginx template that renders when Kong starts. This allows folks to easily get started with Kong, but it also creates challenges for users who want to modify their Nginx configurations. Before this release, there was no mechanism to add or update an Nginx directive within the nginx.conf used to run Kong. Instead, users had to create a custom Nginx template, which they then needed to update every time they upgraded Kong. This created time-consuming maintenance work and the potential for unforeseen issues.

Fortunately, dynamic injection of Nginx directives eliminates these challenges. In CE 0.14, Kong users can now specify any Nginx directive directly in their Kong config file, removing the need to constantly update a custom Nginx template. To accomplish this, users specify Nginx directives as config variables whose prefix determines the Nginx block in which the directive is placed.

For Example:

Adding the following line in your kong.conf:

nginx_proxy_large_client_header_buffers=8 24k

will add the following directive to the proxy `server` block of Kong’s Nginx configuration file:

large_client_header_buffers 8 24k;

Like all properties in `kong.conf`, this can also be specified via environment variables:

export KONG_NGINX_PROXY_LARGE_CLIENT_HEADER_BUFFERS="8 24k"

It is also possible to include entire `server` blocks by injecting the Nginx `include` directive. Docker users, for example, can simply mount a volume on Kong’s container and use the `include` directive to pull in custom Nginx server blocks or directives.
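As a minimal sketch of how that can be wired up (the file path and server block below are hypothetical, and this assumes the `nginx_http_` prefix targets Nginx’s `http` block in the same way `nginx_proxy_` targets the proxy `server` block), a single injected directive in kong.conf:

nginx_http_include=/usr/local/kong/custom-server.kong.conf

renders as `include /usr/local/kong/custom-server.kong.conf;` inside the `http` block, and the included file can then define its own `server` block:

# /usr/local/kong/custom-server.kong.conf (hypothetical)
server {
    listen 8888;
    location /custom-status {
        return 200;
    }
}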

 

Kong’s method of injecting Nginx directives provides dramatically improved flexibility and ease of use for users who need granular control over Nginx.

Benefits include:

  • Changes to Nginx are automatically reflected in Kong
  • Custom Nginx modules work out-of-the-box
  • No tedious maintenance work while upgrading Kong versions
  • Avoid changes to existing code
  • Confidence that new Nginx directives will not break Kong

 

At Kong, we’re committed to Open Source and empowering users to make their own decisions. We know that many of our users are Nginx Ninjas that want to exercise more control over Nginx through Kong, and we are happy to make your lives easier. For users with custom Nginx modules, legacy Nginx configurations, or those that want to experiment with changes to their Nginx config, our 0.14 release will enable you to modify Nginx directives easily and without risk to production.

And of course, thank you to our open source contributors, core maintainers (@hisham @bungle @kikito) and other Kong Inc. employees who all contributed a great deal to this release!

Happy Konging!

The post Kong CE 0.14 Feature Review – Nginx Injected Directives appeared first on KongHQ.

Kubernetes: Where Are We Going?


This is the second of two blogs examining the history and future of Kubernetes. For more information, check out our e-book Kubernetes: The Future of Infrastructure. In our last blog, we explored the rise of containers and how they created a need for container orchestration tools like Kubernetes. In this blog, we’ll explore where Kubernetes and container orchestration as a whole will take us.

 

The future of Kubernetes is intimately tied to the future of containers and microservices. While it’s possible to transition to microservices without containers, the benefits are not as pronounced. Containers offer finer-grained execution environments, better server utilization, and better response to less predictable workloads; however, robust container orchestration is essential to take full advantage of these benefits. Kubernetes facilitates container adoption by providing robust orchestration for running deployments with thousands of containers in production, which is critical for a microservices architecture. This is likely a key reason why a recent survey by The New Stack reported that 60% of respondents who deployed containers in production rely on Kubernetes for orchestration. For companies adopting containers, Kubernetes automates many of the painful manual tasks and much of the infrastructure complexity associated with deploying, scaling, and managing containerized applications. Simply put, Kubernetes addresses many of the core questions and challenges associated with deploying a container-based microservices architecture.

Common questions that are addressed by Kubernetes include:

  • How do we design applications that may consist of many moving parts, but can still be easily deployed and orchestrated?
  • How can we design applications that can be easily moved from one cloud to another?
  • How can we keep storage consistent with multiple instances of an application?
  • How can we ensure load is evenly distributed across all containers?
  • How can we reuse our existing technical skills and techniques on application development and design without deep diving into other areas that can slow us down?

Kubernetes and the Future of Microservices Architectures

Since microservices allow applications to be engineered as smaller independent services that don’t depend on a specific coding language, application components can be combined to offer the full breadth of functionality of a traditional monolithic application. To accomplish this, however, components must be properly monitored and managed across systems. Without effective orchestration tools, container monitoring and management would become highly taxing for DevOps teams as the number of microservices proliferates across systems. Kubernetes solves this problem by being vendor agnostic, meaning it can function as a single fabric for orchestrating containers across all our systems. This allows us to have multiple teams building various components across languages and systems without compatibility issues. For example, a microservice written in Ruby running in a container on Kubernetes can communicate seamlessly with a microservice built in Python running in another container via Kubernetes, regardless of whether those containers are on-prem or in the cloud. This affords numerous advantages to development teams, as they’re able to easily push containers from on-prem environments to the cloud, across clouds, and back again. This flexibility substantially reduces the risk of vendor lock-in and provides developers with the freedom to run workloads where they want without concern for compatibility.

We’ve talked about how Kubernetes allows us to address key considerations for deploying containers in production, but beyond this, Kubernetes is also paving the way for organizations to transition from VMs to containers. Kubernetes offers a growing ecosystem of out-of-the-box tools, such as monitoring tools, CI/CD tools, and many others that natively support Kubernetes. As a result, the decision to use Kubernetes is not solely based on how it performs in production, but also how it provides an organization with a path to adopting containers.  The Kubernetes ecosystem offers all the building blocks for everything we need to leverage containers to build out a rock solid microservices architecture. Consequently, as the Kubernetes ecosystem matures it’s likely that container and microservices adoption will further accelerate.

Alternatives to K8S

A couple of other tools also provide container orchestration capabilities. The two biggest players competing with Kubernetes are Docker Swarm and Mesosphere DC/OS. Docker Swarm is an easier-to-use option, targeting the most common criticism of Kubernetes: that it is complex to deploy and manage. Mesosphere DC/OS is a container orchestrator that was designed for big data. It runs containers alongside other workloads such as machine learning and big data, and it offers integrations with related tools such as Apache Spark and Apache Cassandra.

Overall, Kubernetes is currently the most broadly adopted and mature container orchestration tool, as evidenced by its number of community contributors and its enterprise adoption. The key to Kubernetes’ success has been its ability to provide not only the building blocks for launching and monitoring containers, but also support for different sets of container use cases on top of the platform to address different types of advanced workloads. In Kubernetes, for example, we can find native objects – native entities within the system – that allow us to start a daemon, a container, or a database. For other solutions, there is no distinction between containers that are running something that could be destroyed at any time and those that are not.

The Future is Bright

Containers are rapidly taking over the world of software development, and the momentum behind Kubernetes is accelerating. It has become the go-to container orchestrator through its deep expertise, enterprise adoption, and robust ecosystem. With a growing number of contributors and service providers backing it, Kubernetes will continue to improve and expand upon its functionality, the types of applications it can support, and integrations with the overarching ecosystem. The combination of these factors will further accelerate and enhance the use of containers and microservices to fundamentally reshape the way in which software is developed, deployed, and improved upon. Further, as container orchestration and microservices continue to mature, they will open the door to the adoption of new deployment patterns, development practices, and business models.

 

 

The post Kubernetes: Where Are We Going? appeared first on KongHQ.

Announcing Kong EE 0.33 – Featuring RBAC, Workspaces, Prometheus Plugin, and More!


We’re excited to announce our biggest Enterprise Edition release to date – Kong EE 0.33. We’ve packed this release full of new features and capabilities, improvements to your favorite Kong tools, and bug fixes to maximize your Kong experience.

Below, we’ll dive into the key reasons why you should be excited to start your Kong journey or upgrade your existing deployment. Be sure to check out the changelog for the full details. Happy Konging!

What’s New?

– New Features –

New RBAC Implementation

Take granular control of your resources with Kong’s new RBAC implementation supporting both endpoint and entity-level access control. Stop worrying about whether the wrong people can access your resources. Kong’s RBAC allows you to set up roles with different permission levels to easily ensure that the only people accessing a given resource are the ones you want. Gain complete flexibility to define access. Define the roles and level of access that you want, and assign individuals or teams to those roles. Check out the Documentation.

Workspaces

Build the most efficient teams possible with Kong’s new Workspaces feature. Improve your workflows by grouping APIs and plugins by team. Restrict access so that teams only see what they need and what they’re authorized for. Keep all resources associated with a particular team isolated to that team to reduce clutter, minimize the potential for errors, and eliminate security issues. Note that Workspaces are available in the API only, not in the Admin GUI.

– New Plugins –

Prometheus

Want to use Prometheus to monitor your Kong cluster performance? Now you can. Use Kong’s Prometheus plugin to expose metrics related to Kong and proxied upstream services in Prometheus exposition format. Gain visibility into performance metrics, including: status codes, latency histograms, bandwidth, DB reachability, and connections. For increased security, couple Prometheus with Kong’s RBAC to limit access to the metric data to the Prometheus server.

StatsD

Gain insights into your consumers and monitor activity across your Services and Routes with the StatsD plugin. Easily log metrics to a StatsD server or Collectd daemon to unlock rich, real-time insights into requests and responses. Analyze trends and proactively address issues by tracking requests and responses globally, by individual user, and by unique user.

What’s Improved?

Admin GUI

Increase visibility over your consumers with the ability to view which plugins are configured on a consumer

Dev Portal

Better secure your cluster by blocking revoked Dev Portal Users and Consumers at the proxy

Improve performance for your plugins, including:

  • OpenID Connect
  • Forward Proxy
  • Canary
  • LDAP Auth Advanced
  • And much more! Check out the rest here!

What’s Fixed?

For existing Kongers, there are several bug fixes across the dev portal, the admin GUI, and several plugins, including OpenID, Zipkin, LDAP Auth Advanced, and Rate Limiting Advanced. Read the full list here

The post Announcing Kong EE 0.33 – Featuring RBAC, Workspaces, Prometheus Plugin, and More! appeared first on KongHQ.

Kong’s LATAM Partner Briefing


A special thank you to our LATAM Kong Channel Partners! We just shared an amazing week in Santiago, Chile where the team outlined Kong’s technical capabilities, business strategies, and roadmap for continued success. The team also ventured to Sao Paulo, Brazil to host a Meetup and connect with new customers.

It was great spending several days working with our partners. Not only did we share a knowledge transfer of Kong, but we got to meet our incredible partners in person. We view our channel partners as an extension of the Kong team. The time together was an invaluable experience that further strengthened our relationships.

In addition to workshops and training, the Kong team also toured the Casas Del Bosque Winery, explored the incredible city of Valparaiso, and shared meals with our new friends.

Casas Del Bosque Winery

Valparaiso

Bao Bar – Partner Dinner Party

 

Meetups in Santiago, Chile and Sao Paulo, Brazil

We were humbled by the incredible turnout for our Meetup events. A special thank you to Launch Coworking in Santiago and Accenture Sao Paulo for hosting our events!

Line out the door to hear about Kong – Thank you Santiago!

Marco Palladino, Kong CTO, presenting Kubernetes Ingress to a packed house in Sao Paulo

Here are some pictures from our special LATAM Partner Briefing:

Mia Blank, Dir. of Global Channel Sales, presenting the opening session

Beda Yang, LATAM Enterprise Account Executive, presenting sales techniques to partners

Marco Palladino, Kong CTO, presenting the technical track of the partner briefing

We’d like to thank the entire team at Atton el Bosque. The hotel and staff became a wonderful second home to everyone attending the event. Kong is looking forward to building our LATAM team in this incredible location for years to come.

Visit our partners page to learn more about our incredible global partners.

The post Kong’s LATAM Partner Briefing appeared first on KongHQ.

Expose Performance Metrics in Prometheus for any API – Kong CE 0.14 Feature Highlight


As part of our series helping you get up to speed on the new features released in Kong CE 0.14, we want to dive into one of the most important plugins we’ve created to date – Kong’s Prometheus Plugin. Kong is committed to the Open Source Ecosystem, and we’re excited to expand our coverage of the CNCF’s most popular hosted projects.

A key measure of a technical organization’s efficiency is the speed at which performance issues are identified and addressed. With the ever-increasing complexity and pace of innovation in technology today, real-time monitoring and alerting are essential to operating a high performing technical organization. Kong’s new Prometheus plugin empowers users to easily track performance metrics for upstream APIs and expose them in Prometheus exposition format, providing the backbone for implementation of robust monitoring and alerting.

By using a Prometheus Collector to scrape the endpoint on the Admin API, Kong users can gather performance metrics across all their Kong clusters, including those within Kubernetes clusters. Even if your microservice doesn’t have a Prometheus exporter, putting Kong in front of it will expose a set of metrics for your microservices and enable you to track performance. Once the data is scraped, users can run custom rules across the dataset to pinpoint issues. With effective automation, users can leverage the Prometheus plugin to set up workflows that proactively identify, evaluate, and resolve performance issues inside and outside of Kong.
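As a minimal sketch of the Prometheus side (the target address and scrape interval below are assumptions, not defaults), a scrape job pointed at the metrics endpoint exposed on Kong’s Admin API could look like this:

scrape_configs:
  - job_name: 'kong'
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ['kong:8001']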

 

The plugin records and exposes metrics at the node level; however, Prometheus can be used to aggregate metrics across the entire cluster. Kong’s Prometheus plugin currently supports the following metrics:

 

  • Status codes: HTTP status codes returned by upstream services. These are available per service and across all services.
  • Latencies Histograms: Latency as measured at Kong:
    • Request: Total time taken by Kong and upstream services to serve requests.
    • Kong: Time taken for Kong to route a request and run all configured plugins.
    • Upstream: Time taken by the upstream service to respond to requests.
  • Bandwidth: Total Bandwidth (egress/ingress) flowing through Kong. This metric is available per service and as a sum across all services.
  • DB reachability: A gauge type with a value of 0 or 1, representing if DB can be reached by a Kong node or not.
  • Connections: Various Nginx connection metrics like active, reading, writing, and number of accepted connections.

 

You can enable the plugin for your services:

curl http://kong:8001/plugins -d 'name=prometheus' -d 'service_id=<uuid>'

 

Once the plugin is enabled, you can consume your metrics via the following endpoint:

curl http://kong:8001/metrics
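The response is returned in plain-text Prometheus exposition format. As an illustration only (the metric name and help text below are illustrative rather than a verbatim transcript of the plugin’s output), the DB reachability gauge described above would appear along these lines:

# HELP kong_datastore_reachable whether the datastore can be reached by this Kong node
# TYPE kong_datastore_reachable gauge
kong_datastore_reachable 1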

 

At Kong, we’re committed to providing our users with the best possible solutions and experience for their API needs. To see the Prometheus plugin in action, check out this recorded webinar. And of course, thank you to our open source contributors, core maintainers ( @thibaultcha @hisham @bungle @kikito) and other Kong Inc. employees who all made huge contributions to this release!

 

Happy Konging!

 

The post Expose Performance Metrics in Prometheus for any API – Kong CE 0.14 Feature Highlight appeared first on KongHQ.

Hello, Atlanta!


At Kong, we have experienced tremendous growth over the past year. Just last year, we moved our San Francisco headquarters to a new, larger office in downtown San Francisco and opened our first offices in England and Mexico. To support our continued growth, we are now adding a second U.S. location (fourth globally) — I’m excited to call Atlanta our new home in the Southeast!

Kong is built on smart, driven people. As we grow and expand, we continue to place a high priority on finding the best talent. Atlanta is a flourishing market for tech talent. In fact, CBRE Research ranks Atlanta as a top 10 market for tech talent. We saw a big opportunity to bring this unique talent to Kong. Through the new Atlanta office, we will now have a bi-coastal presence in the U.S., allowing us to better support our customers and partners not only across the Americas but also across EMEA.

We’re very excited to join Atlanta’s tech community. If you’re based in Atlanta, we’d love to have you join us! We’re currently hiring for several sales-related roles. To see details on these open roles and apply, please visit https://konghq.com/jobs/.

 

Brigitte Boyles, Kong’s sales development manager, at our new Atlanta office

The post Hello, Atlanta! appeared first on KongHQ.

Announcing Free Trials for Kong Enterprise Edition 0.33


Want to test drive Kong Enterprise Edition? Now you can.

We’re excited to announce the launch of free trials for Kong Enterprise Edition (EE). For a limited time, you can unlock the full functionality of Kong EE to get a first-hand view of how it can help your organization. We recently released version 0.33 of EE, and we’ve added a host of new features and functionality to improve the way that you deploy, monitor, manage, and optimize your microservice APIs.

New in Enterprise Edition 0.33:

The Kong Enterprise Edition Platform:

Admin GUI

Unleash Kong by using our powerful Admin GUI to execute actions directly against your Kong cluster from a browser. Simplify the management of APIs, consumers, plugins, certificates, and upstream and downstream objects through a graphical user interface. The Kong Admin GUI uses the same RESTful Admin API commands that you’d issue from a command line.

Developer Portal

Take the next step to becoming a global platform with Kong’s API developer portal solution. Seamlessly onboard new developers and deliver an end-to-end branded developer experience. Use the Kong Developer Portal to generate API documentation, create custom pages, manage API versions, and secure developer access.

Analytics

Monitor your Kong health and unlock deep insight into the microservice API transactions traversing Kong. Rich analytics give you full visibility into how your APIs and Gateway are performing. Quickly access key statistics, monitor vital signs, and pinpoint anomalies in real time. View your hit/miss ratio, cache size, and cache settings to optimize Kong performance.

High Availability

Reliably scale microservices applications and APIs with fine-grained traffic control functionality. Use our suite of plugins to achieve more accurate, flexible, and performant rate limiting; combine both regex variables and dynamic transformations for sophisticated routing; and cache content closer to clients for faster responses and reduced network utilization.

Security

Take control with powerful security features designed for Kong, APIs, and Users. Stop worrying about whether the wrong people can access your resources and use Kong’s RBAC, OpenID Connect, and OAuth 2.0 Inspection capabilities to improve your security posture.

Support

Test, stage, and deploy Kong products and services – our Customer Success team looks forward to working with you every step of the way. Kong-trained experts proactively work with your team to anticipate issues and ensure your success with Kong.

Get Started with Kong Enterprise Edition:

Whether you’re a longtime Community Edition user or are new to the Kong platform, our free trials program is the perfect way to see how Kong EE can take your organization to the next level. We look forward to working with you and encourage you to check out our EE documentation for tips on how to get started.

Happy Konging!

The post Announcing Free Trials for Kong Enterprise Edition 0.33 appeared first on KongHQ.


Why I Believe in Kong


Kong Named a Visionary in the 2018 Gartner Magic Quadrant for Full Lifecycle API Management

From the moment I joined Kong, I knew there was something magical going on here. I believe the passion, intellect, and energy of the team, paired with the tailwinds of the industry’s prolific adoption of modern architectures, is the reason Kong was recently named a Visionary in Gartner’s April 2018 Magic Quadrant for Full Lifecycle API Management. Gaining favorable recognition from Gartner is a top goal for all technology companies, and in my ~25 years in the software industry, I haven’t seen a company progress as rapidly as Kong on the Magic Quadrant.

Kong executive team

The typical pattern I’ve witnessed is that companies first make it onto the “Cool Vendors” list, then onto the Magic Quadrant as a Niche Player, then become a Visionary, and then a Leader. So how did Kong jump to Visionary only three years after being born out of an open source project? I believe the answer is that a new age of modern architectures is upon us, and it requires a new paradigm for handling APIs.

It’s no secret that architectures are evolving. In recent years, the proliferation of microservices, serverless, containers, and Kubernetes has fundamentally changed the way companies develop, deploy, and improve software. But as is true with most innovations, these new technologies have been hampered by legacy components in other parts of the stack, particularly API gateways and management platforms.

Kong has a focus on enabling all modern architectures at scale. Kong is compatible with all environments and architecture patterns, which allows customers to transition from old monolithic architecture towards the architecture or deployment pattern of their choice without risk of being locked into one vendor or architecture.

This fundamental shift towards modern architectures is a core reason why Kong has experienced such rapid growth with some of the world’s best and largest companies. However, another primary driver of Kong’s success comes from our open source community. We currently have over 20 million downloads of Kong CE, and each user provides tremendous value as an advocate, contributor, or evaluator for the Kong platform. The level of enthusiasm that the community brings is incredible. There’s no greater example of this than a group of our contributors, unbeknownst to us, writing a book about how they see Kong fundamentally changing the API landscape.

While expectations set by our community and industry experts are high, we’re excited to exceed them. And we want to invite you to be a part of the journey. Are you interested in learning about the Why, How, and What of the changing API landscape? At the upcoming Kong Summit industry thought leaders, enterprise customers, community users, and Kong executives will come together to discuss the trends shaping the API landscape and exciting Kong announcements. Reserve your spot today!

 

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post Why I Believe in Kong appeared first on KongHQ.

Service Mesh – A New Pattern, Not A New Technology?


What is Service Mesh and Where Did it Come From?

Over the past few months, you may have noticed the explosion of industry chatter and articles surrounding service mesh and the future of software architecture. These discussions have been highly polarizing, with tribes forming around specific vendors. While this partisan trend is to be expected, the common thread among these discussions is the rapid transformation of how APIs are used in the enterprise, and what this means for the topology of our traffic.

In a short period of time, service APIs went from being primarily an edge interface connecting developers outside of the organization with internal systems to the glue that binds those internal systems (microservices) into a functioning whole. Consequently, one of the unavoidable results of microservice-oriented architectures is that internal communication within the data center will increase. Service mesh arose as a potential solution to the challenges that arise from increased East-West traffic by providing a different framework for deploying existing technology.  

 

 

As CTO of Kong, and an active participant in these conversations, I have noticed a common misconception about what service mesh is. In the hope of dispelling confusion and advancing discussions, I want to unequivocally state the following: service mesh is a pattern, not a technology.

Service Mesh is a Pattern, Not a Technology

In the same way that microservices are a pattern and not a specific technology, so too is service mesh. Distinguishing between the two sounds more complex than it is in reality. If we think about this through the lens of Object Oriented Programming (OOP), a pattern describes the interface – not the implementation. 

In the context of microservices, the service mesh deployment pattern becomes advantageous due to its ability to better manage East-West traffic via sidecar proxies. As we are decoupling our monoliths and building new products with microservices, the topology of our traffic is also changing from primarily external to increasingly internal. East-West traffic within our datacenter is growing because we are replacing function calls in the monolith with network calls, meaning our microservices must go on the network to consume each other. And the network – as we all know – is unreliable.

What service mesh seeks to address through a different deployment pattern are the challenges associated with increased East-West traffic. While 100ms of middleware processing latency was not ideal but may have been acceptable for traditional N-S traffic, in a microservice architecture with E-W traffic it can no longer be tolerated. The reason is that the increased east-west traffic between services compounds that latency – for example, seven sequential service-to-service hops at 100ms each add up to roughly 700ms by the time the chain of API requests across different services has been executed and returned.

In an effort to reduce this latency, sidecar proxies running alongside each microservice process are being introduced to remove an extra hop in the network. Sidecar proxies, which correspond to data planes on the execution path of our requests, also provide better resiliency since we no longer have a single point of failure. However, sidecar proxies bear the cost of running an instance of our proxy for every instance of our microservices, which necessitates a small footprint in order to minimize resource consumption.

From a feature perspective, however, most of what service mesh introduces has been provided for many years by API Management products. Features such as observability, network error handling, health-checks, etc. are hallmarks of API management. These features don’t constitute anything novel in themselves, but as a pattern, service mesh introduces a new way of deploying those features within our architecture.  

Traditional API Management Solutions Can’t Keep Up

Microservices and containers force you to look at systems in terms of more lightweight processes, and service mesh as a pattern fills this need by providing a lightweight process that can act as both proxy and reverse proxy running alongside the main microservice. Why won’t most traditional API Management solutions allow this new deployment option? Because they were born in a monolithic world. As it turns out, API Management solutions built before the advent of Docker and Kubernetes were monoliths themselves and were not designed to work effectively within the emerging container ecosystem. The heavyweight runtimes and slower performance offered by traditional API management solutions were acceptable in the traditional API-at-the-edge use case, but they are not in a microservices architecture, where latency compounds via increased east-west traffic activity. In essence, traditional API management solutions are ultimately too heavyweight, too hard to automate, and too slow to effectively broker the increased communication inherent in microservices.

 

 

Since developers understand this, legacy API Management solutions born before the advent of containers have introduced what they call “microgateways” to deal with E-W traffic and avoid rewriting their existing, bloated, monolithic gateway solutions. The problem is that these microgateways – while more lightweight – still require the legacy solution to run alongside them in order to execute policy enforcement. This doesn’t just mean keeping the same old heavy dependency in the stack; it also means increased latency on every request. It’s understandable, then, why service mesh feels like a whole new category. It’s not because it’s new, but rather because the API Management solutions of yesterday are incapable of supporting it.

Conclusion

When you look at service mesh in the context of its feature-set, it becomes clear that it’s not very different from what traditional API Management solutions have been doing for years for N-S traffic. Most of the networking and observability capabilities are useful in both the N-S and E-W traffic use cases. What has changed is the deployment pattern, which enables us to run the gateway/proxy as a lightweight, fast sidecar container – not the underlying feature-set.

The feature-set that a service mesh provides is a subset of the feature-set that API Management solutions have been offering for many years, in particular when it comes to making the network reliable, service discovery, and observability. The innovation of service mesh is its deployment pattern, which enables us to run that same feature-set as a lightweight sidecar process/container. Too often our industry confuses – and sometimes pushes – the idea that a specific pattern equals the underlying technology, as is the case in many conversations around service mesh.

The post Service Mesh – A New Pattern, Not A New Technology? appeared first on KongHQ.

Enabling Tracing with Zipkin – Kong CE 0.14.1 Feature Highlight


Kong recently released CE 0.14.1 to build upon CE 0.14 with several improvements and minor fixes. As part of our series helping you get up to speed on our newest features, we want to dive into another important plugin we’ve created to improve your understanding of your infrastructure – Kong’s Zipkin Plugin.

As organizations move towards microservices, understanding network latency becomes a critical component of ensuring high performance. Microservices talk to each other over the network, and Kong’s new Zipkin plugin allows Kong users to troubleshoot latency problems within their architecture by tracking the duration of API calls between services.

By using the Zipkin plugin to measure latency, Kong users can identify issues within their services and expedite debugging. When enabled, the Zipkin plugin traces requests in a way compatible with Zipkin – propagating distributed tracing spans and reporting them to a Zipkin server. The code revolves around an opentracing core using the opentracing-lua library to collect timing data for requests in each of Kong’s phases.

The plugin uses opentracing-lua compatible “extractors”, “injectors”, and “reporters” to implement Zipkin’s protocols. When a request comes in, the extractor collects information on it and, if no trace ID is present, generates one, sampled probabilistically based on the sample_ratio configuration value. When requests are ready to be sent out, the injector adds trace information to the outbound request. As data is collected, the plugin sends batches to a Zipkin server using the Zipkin v2 API. The plugin follows Zipkin’s “B3” specification for which HTTP headers to use, and it also supports Jaeger-style uberctx- headers for propagating baggage. For instance, we might configure the Zipkin plugin on a Service by making the following request:

$ curl -X POST http://kong:8001/services/{service}/plugins \
--data "name=zipkin"  \
--data "config.http_endpoint=http://your.zipkin.collector:9411/api/v2/spans" \
--data "config.sample_ratio=0.001"

Or we could enable it on a consumer like so:

$ curl -X POST http://kong:8001/plugins \
--data "name=zipkin" \    --data "consumer_id={consumer_id}"  \
--data "config.http_endpoint=http://your.zipkin.collector:9411/api/v2/spans" \
--data "config.sample_ratio=0.001"

Through the use of tracing and Kong’s Zipkin plugin, organizations can easily pinpoint latency issues to optimize performance. As architectures become increasingly complex and services increasingly use the network to communicate, identifying slowdown areas can make a big difference in overall system performance. With Kong, every request can be traced and understood instantly, reducing the time spent on investigating issues and improving workflows.

At Kong, we’re committed to equipping our users with the best possible solutions for their microservice and API needs. To test drive the Zipkin plugin as part of our Enterprise Edition, start your free trial today! And of course, thank you to our open source contributors, core maintainers ( @thibaultcha @hisham @bungle @kikito @james_callahan) and other Kong Inc. employees who all made huge contributions to this release!

 

Happy Konging!

The post Enabling Tracing with Zipkin – Kong CE 0.14.1 Feature Highlight appeared first on KongHQ.

Introducing Kong Support for Service Mesh Deployments


Earlier this month, I shared some thoughts about service mesh as a deployment pattern versus a new technology and why traditional API management solutions can’t keep up with service mesh patterns. I’m excited to announce today that our Kong platform will support service mesh deployments. Users will be able to use Kong as a standalone service mesh or to integrate it with Istio and other service mesh players.

We designed our platform to be lightweight, flexible and deployment-agnostic, allowing users to easily manage increased East-West network traffic and latency within modern, microservice-oriented architectures. Where traditional API management platforms may introduce about 200 milliseconds of processing latency between services in a container ecosystem, we create less than 10 milliseconds of delay. Our plugin architecture also offers a lot of flexibility, enhancing latency performance by removing unnecessary functionality and supporting seamless integrations with ecosystem participants.

While older, traditional API solutions can’t keep up, we enable developers, DevOps pros and solutions architects to succeed in any architecture — old and new.

To learn more about using Kong for service mesh deployments, join me and other technologists at the Kong Summit on September 18-19 in San Francisco! We’ll cover Kong’s service mesh capabilities in more depth and discuss the future of service mesh.

The post Introducing Kong Support for Service Mesh Deployments appeared first on KongHQ.

Optimizing the Prometheus StatsD Exporter for Cloud Scale


Kong Cloud has been using StatsD and Prometheus heavily for monitoring and metrics collection. In this blog post, we discuss how we use StatsD and Prometheus on Kong Cloud, the performance problem we found, and how we went about solving it.

What is StatsD?

StatsD is a metrics server that accepts events over UDP or TCP and exports them to various backends. A typical StatsD event looks like:

host.sfo1.request.count:123|c

Every StatsD event is a string in a format of <metricname>:<value>|<type>. The above example represents a metric called host.sfo1.request.count with the type of counter and the value of 123.

On Kong Cloud, we use the StatsD Prometheus exporter in our metrics pipeline to measure the KPIs (Key Performance Indicators) of our service. The StatsD Prometheus exporter is a daemon that listens for StatsD events and exports them in Prometheus exposition format. The exporter has a mapping config that maps each StatsD metric to a Prometheus metric.

For example, a mapping config translates incoming StatsD events into named Prometheus metrics, with labels extracted from the dotted metric name; a minimal illustration follows.
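As a rough sketch (the rule and metric names here are illustrative, not Kong Cloud’s actual configuration), a mapping rule in the exporter’s YAML format such as:

mappings:
  - match: host.*.request.count
    name: "http_requests_total"
    labels:
      datacenter: "$1"

would translate the StatsD event host.sfo1.request.count:123|c into a Prometheus counter named http_requests_total with the label datacenter="sfo1" and a value of 123.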

The Problem

On Kong Cloud, various StatsD events are generated for each request. When the client request rate climbed to several thousand requests per second, we spotted high CPU usage from the StatsD exporter, which was consuming one and a half cores on an AWS m4.large instance.

To take a closer look, we profiled the StatsD exporter, using the perf tool to sample stacks. This gave us a rough idea of which functions were taking up most of the CPU time. Then we used the perf_data_converter tool to convert the perf.data file to profile.proto and used pprof to analyze the results.

pprof gave us the percentage of CPU time each function took, in descending order:

(pprof) top100
Showing nodes accounting for 311858250000, 83.29% of 374429750000 total
Dropped 505 nodes (cum <= 1872148750)
flat flat% sum% cum cum%
63493500000 16.96% 16.96% 63493500000 16.96% [[kernel.kallsyms]]
48018750000 12.82% 29.78% 48018750000 12.82% regexp.(*machine).onepass
31408750000 8.39% 38.17% 31408750000 8.39% regexp/syntax.(*Inst).MatchRunePos
17480000000 4.67% 42.84% 17480000000 4.67% runtime.duffcopy
17274250000 4.61% 47.45% 17274250000 4.61% regexp/syntax.EmptyOpContext
15302000000 4.09% 51.54% 15302000000 4.09% regexp.(*inputString).step
15244750000 4.07% 55.61% 15244750000 4.07% runtime.mapiternext
9875250000 2.64% 58.25% 9875250000 2.64% runtime.mallocgc
9310250000 2.49% 60.73% 9310250000 2.49% regexp.onePassNext
7658250000 2.05% 62.78% 7658250000 2.05% runtime.memmove
7192750000 1.92% 64.70% 7192750000 1.92% regexp/syntax.(*Inst).MatchRune
6075750000 1.62% 66.32% 6075750000 1.62% runtime.mapassign_faststr
5156750000 1.38% 67.70% 5156750000 1.38% sync.(*Mutex).Lock
4943500000 1.32% 69.02% 4943500000 1.32% github.com/prometheus/statsd_exporter/vendor/github.com/beorn7/perks/quantile.NewTargeted.func1
4722000000 1.26% 70.28% 4722000000 1.26% runtime.nextFreeFast
4608250000 1.23% 71.51% 4608250000 1.23% runtime.scanobject
4420750000 1.18% 72.69% 4420750000 1.18% sync.(*Mutex).Unlock
4281750000 1.14% 73.84% 4281750000 1.14% runtime.mapiterinit
4207000000 1.12% 74.96% 4207000000 1.12% runtime.heapBitsSetType
3840250000 1.03% 75.99% 3840250000 1.03% regexp.(*machine).tryBacktrack
3628250000 0.97% 76.96% 3628250000 0.97% [[vdso]]
3424000000 0.91% 77.87% 3424000000 0.91% runtime.memclrNoHeapPointers
2755500000 0.74% 78.61% 2755500000 0.74% runtime.heapBitsForObject
2732500000 0.73% 79.34% 2732500000 0.73% regexp.(*bitState).push
2695000000 0.72% 80.06% 2695000000 0.72% main.(*metricMapper).getMapping
2112000000 0.56% 80.62% 2112000000 0.56% regexp.(*Regexp).expand
2089250000 0.56% 81.18% 2089250000 0.56% github.com/prometheus/statsd_exporter/vendor/github.com/prometheus/common/model.hashAdd
2047000000 0.55% 81.72% 2047000000 0.55% runtime.growslice
2043500000 0.55% 82.27% 2043500000 0.55% runtime.greyobject
1939750000 0.52% 82.79% 1939750000 0.52% main.(*Exporter).Listen
1877000000 0.5% 83.29% 1877000000 0.5% runtime.makemap

The largest single consumer of CPU time was system calls. This makes sense, as every UDP socket operation involves a system call. What caught our interest was that the Go regular expression engine accounted for around 37 percent of CPU time in total – roughly twice as much as the system calls.

We also tried rebuilding the StatsD exporter from the source using Go 1.10.3, which gave us this result:

(pprof) top 100
Showing nodes accounting for 75230000000, 82.30% of 91412500000 total
Dropped 542 nodes (cum <= 457062500)
flat  flat%   sum%        cum   cum%
17594750000 19.25% 19.25% 17594750000 19.25%  [[kernel.kallsyms]]
16792500000 18.37% 37.62% 16792500000 18.37%  regexp/syntax.writeRegexp
9554250000 10.45% 48.07% 9554250000 10.45%  regexp.onePassCopy
6012750000  6.58% 54.65% 6012750000  6.58%  regexp.cleanupOnePass
5165000000  5.65% 60.30% 5165000000  5.65%  runtime.sigaltstack
4222250000  4.62% 64.92% 4222250000  4.62%  regexp.(*Regexp).allMatches
2703500000  2.96% 67.87% 2703500000  2.96%  regexp.compile
1195250000  1.31% 69.18% 1703500000  1.86%  runtime.mallocgc
1159250000  1.27% 70.45% 1204500000  1.32%  runtime.mapiternext
1112500000  1.22% 71.67% 1112500000  1.22%  runtime.settls
1038500000  1.14% 72.80% 1038500000  1.14%  sync.(*RWMutex).Unlock
912750000     1% 73.80%  912750000     1%  runtime.mapassign_faststr
853000000  0.93% 74.73%  853000000  0.93%  regexp.(*machine).step
840000000  0.92% 75.65%  869750000  0.95%  sync.(*WaitGroup).Wait
814500000  0.89% 76.54% 1154750000  1.26%  runtime.greyobject
780000000  0.85% 77.40%  780000000  0.85%  runtime.largeAlloc
735000000   0.8% 78.20%  735000000   0.8%  [statsd_exporter]
598750000  0.65% 78.86%  598750000  0.65%  runtime.(*itabTableType).(runtime.add)-fm
588500000  0.64% 79.50%  593250000  0.65%  runtime.heapBitsSetType
576750000  0.63% 80.13%  576750000  0.63%  regexp.makeOnePass.func1
513250000  0.56% 80.69%  513250000  0.56%  [[vdso]]
495250000  0.54% 81.23%  515000000  0.56%  crypto/cipher.NewCTR
491000000  0.54% 81.77%  495500000  0.54%  runtime.mapdelete_fast32
480750000  0.53% 82.30%  480750000  0.53%  runtime.memmove

Go 1.10.3 clearly performed better in the runtime itself, giving us 19 percent of CPU time in system calls and roughly 40 percent in the regular expression engine.

Finding a Solution

Since the largest portion of CPU time was spent in regular expression matching, we started to look into the source code of the StatsD exporter. The exporter uses regular expressions to match rules and expand labels from capture groups. For example, a mapping config like the one shown earlier generates regular expressions along the following lines:
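As an illustration (the exact expression the exporter generates may differ in detail), a glob rule such as kong.*.*.*.request.count is compiled into a regular expression of roughly this shape, with each * becoming a capture group that labels can reference as $1, $2, and so on:

^kong\.([^.]*)\.([^.]*)\.([^.]*)\.request\.count$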


All labels are then expanded using the regexp package’s ExpandString function.

After reviewing the rules we used on Kong Cloud, it turned out that we didn’t actually need the full power of regular expressions because:

  1. We only ever use a single * to match an entire dot-separated field. There’s no use case in which we need complex expressions like host.*.status.\d+.
  2. We only ever use a single capture group per label. There’s no use case like host: "$1_$2".

Based on this observation, we set out to refactor the StatsD exporter with a lightweight matcher whose limited feature set was just enough to suit our use cases. We implemented a simple matching type, alongside the existing glob and regex matching types, that uses a finite state machine to mimic the behaviour of regular expression matching.

Simple Matcher Preparation

Every time the mapping rules are reloaded, the exporter builds a state machine following these steps:

  1. Read the rules from YAML and split each rule on dots.
  2. Build a state machine from the split fields. Each field represents a state in the state machine. For example, the following rules –
mappings:
- match: kong.*.*.*.request.count
  name: "kong_requests_proxy"
  labels:
    client: "$1"
    job: "kong_metrics"

- match: kong.*.*.*.status.*
  name: "kong_status_code"
  labels:
    client: "$1"
    service: "$3"
    status_code: $4
    job: "kong_metrics"

- match: kong.*.*.*.kong_latency
  name: "kong_latency_proxy_request"
  labels:
    client: "$1"
    job: "kong_metrics"

- match: kong.*.*.*.upstream_latency
  name: "kong_latency_upstream"
  labels:
     client: "$1"
     job: "kong_metrics"

- match: kong.*.*.*.cache_datastore_hits_total
  name: "kong_cache_datastore_hits_total"
  labels:
     client: "$1"
     job: "kong_metrics"

- match: kong.*.*.*.cache_datastore_misses_total
  name: "kong_cache_datastore_misses_total"
  labels:
    client: "$1"
    job: "kong_metrics"

- match: kong.*.node.*.shdict.*.free_space
  name: "kong_shdict_free_space"
  labels:
    client: "$1"
    node: "$2"
    shdict: "$3"
    job: "kong_metrics"

- match: kong.*.node.*.shdict.*.capacity
  name: "kong_shdict_capacity"
  labels:
    client: "$1"
    node: "$2"
    shdict: "$3"
    job: "kong_metrics"

— will build a state machine in which each dot-separated field becomes a state, with exact fields and * wildcards as the transitions between them (a Go sketch of the idea follows the matching steps below).

 

  3. For labels, replace each regex expansion variable with `%s` and record which capture group it refers to in a labelFormatter struct. For example, client_$1 becomes:
type labelFormatter struct {
    captureIdx int    // index of the capture group the label refers to ($1 here)
    fmtString  string // format string with the variable replaced by %s ("client_%s")
}

Simple Matcher Matching

  1. When a StatsD event comes in, split its metric name on dots.
  2. For each field, take one step through the state machine. Each lookup of the next transition uses a Go map, so it is O(1) per field.
  3. If the transition taken is a *, also append the current field to an array of captures.
  4. * always matches but has a lower priority than an exact match. For example, client.abc will always match the rule client.abc before client.*, regardless of the order of occurrence in the statsd.rules file.
  5. If the walk ends in a matched state with no fields left over, this is a successful match; go to step 6. Otherwise, fall back to glob or regex matching.
  6. Format the labels using the captured groups stored in the array.
  7. Return the matched rule and the formatted labels to the exporter, then go to step 1 for the next event (a minimal Go sketch of these steps follows).
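To make the walk concrete, here is a small, self-contained Go sketch of the idea described above. The type and function names are illustrative and intentionally simplified – they are not the exporter’s actual code – and label formatting and the glob/regex fallback are omitted:

// A minimal sketch of the simple-matcher state machine, not the exporter's actual code.
package main

import (
	"fmt"
	"strings"
)

// node is one state in the matching state machine.
type node struct {
	children map[string]*node // exact-match transitions, keyed by field
	wildcard *node            // transition taken by a "*" field; lower priority than an exact match
	name     string           // non-empty when this node terminates a complete rule
}

func newNode() *node {
	return &node{children: map[string]*node{}}
}

// add inserts a dot-separated rule such as "kong.*.*.*.request.count".
func (n *node) add(match, name string) {
	cur := n
	for _, field := range strings.Split(match, ".") {
		if field == "*" {
			if cur.wildcard == nil {
				cur.wildcard = newNode()
			}
			cur = cur.wildcard
			continue
		}
		next, ok := cur.children[field]
		if !ok {
			next = newNode()
			cur.children[field] = next
		}
		cur = next
	}
	cur.name = name
}

// lookup walks the state machine for one StatsD metric name, returning the matched
// rule name and the fields captured by "*" transitions. There is no backtracking.
func (n *node) lookup(metric string) (string, []string, bool) {
	cur := n
	var captures []string
	for _, field := range strings.Split(metric, ".") {
		if next, ok := cur.children[field]; ok { // exact match wins over "*"
			cur = next
			continue
		}
		if cur.wildcard != nil {
			captures = append(captures, field) // remember the field matched by "*"
			cur = cur.wildcard
			continue
		}
		return "", nil, false
	}
	if cur.name == "" {
		return "", nil, false
	}
	return cur.name, captures, true
}

func main() {
	root := newNode()
	root.add("kong.*.*.*.request.count", "kong_requests_proxy")
	root.add("kong.*.node.*.shdict.*.free_space", "kong_shdict_free_space")

	name, captures, ok := root.lookup("kong.client1.service1.route1.request.count")
	fmt.Println(name, captures, ok) // kong_requests_proxy [client1 service1 route1] true
}

A lookup is a single pass over the dot-separated fields with one map access per field, which is what removes the regular expression cost measured earlier.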

Performance Comparison

We reran the perf and pprof profiling with the same workload, this time with the simple matching type enabled, and received the following:


(pprof) top 100
Showing nodes accounting for 60068250000, 71.04% of 84558750000 total
Dropped 480 nodes (cum <= 422793750)
flat  flat%   sum%        cum   cum%
25372500000 30.01% 30.01% 25372500000 30.01%  [[kernel.kallsyms]]
3718750000  4.40% 34.40% 3718750000  4.40%  runtime.memmove
2691000000  3.18% 37.59% 4659000000  5.51%  runtime.mallocgc
2037750000  2.41% 40.00% 2161500000  2.56%  runtime.mapassign_faststr
2019000000  2.39% 42.38% 2166500000  2.56%  runtime.heapBitsSetType
1964500000  2.32% 44.71% 2003500000  2.37%  runtime.mapiternext
1556250000  1.84% 46.55% 1556250000  1.84%  runtime.nextFreeFast (inline)
1369000000  1.62% 48.17% 1587750000  1.88%  runtime.scanobject
1263000000  1.49% 49.66% 1263000000  1.49%  github.com/beorn7/perks/quantile.NewTargeted.func1
1258750000  1.49% 51.15% 1848750000  2.19%  regexp.(*machine).tryBacktrack
1255000000  1.48% 52.63% 1255000000  1.48%  runtime.memclrNoHeapPointers
1217250000  1.44% 54.07% 1255250000  1.48%  runtime.mapaccess2_faststr
1145000000  1.35% 55.43% 1145000000  1.35%  [[vdso]]
1063500000  1.26% 56.68% 1063500000  1.26%  runtime.aeshashbody
940500000  1.11% 57.80%  940500000  1.11%  main.(*metricMapper).getMapping
870250000  1.03% 58.83%  870250000  1.03%  regexp.(*inputString).step
802500000  0.95% 59.77%  802500000  0.95%  regexp/syntax.(*Inst).MatchRunePos
796750000  0.94% 60.72%  815750000  0.96%  runtime.mapaccess1_faststr
788250000  0.93% 61.65%  788250000  0.93%  github.com/prometheus/common/model.hashAdd
784250000  0.93% 62.58%  784250000  0.93%  main.(*Exporter).Listen
696750000  0.82% 63.40%  749750000  0.89%  runtime.heapBitsForObject
695000000  0.82% 64.22%  695000000  0.82%  strings.genSplit
675000000   0.8% 65.02%  675000000   0.8%  syscall.Syscall6
615750000  0.73% 65.75%  615750000  0.73%  regexp.(*machine).backtrack
560000000  0.66% 66.41%  603750000  0.71%  github.com/prometheus/common/model.LabelsToSignature
521500000  0.62% 67.03%  521500000  0.62%  runtime.unlock
510750000   0.6% 67.63%  510750000   0.6%  runtime.lock
482250000  0.57% 68.20%  588250000   0.7%  runtime.mapiterinit
468250000  0.55% 68.76%  468250000  0.55%  github.com/beorn7/perks/quantile.(*stream).merge
454250000  0.54% 69.29%  454250000  0.54%  unicode/utf8.ValidString
427500000  0.51% 69.80%  427500000  0.51%  runtime.indexbytebody
374500000  0.44% 70.24%  428250000  0.51%  runtime.growslice
337500000   0.4% 70.64%  590000000   0.7%  regexp.(*bitState).push (inline)
335500000   0.4% 71.04%  445000000  0.53%  runtime.greyobject

 

StatsD Exporter version   Syscall CPU percentage (prorated)   Time taken to finish 100,000 mapping iterations
Stock binary              20.36%                              N/A
Go 1.10.3                 23.39%                              1.655s
Our version               42.23% (+19% vs. Go 1.10.3)         1.003s (-39% vs. Go 1.10.3)

 

Under the same workload, a larger share of CPU time spent in system calls is better – it means less CPU is being spent on matching overhead.

Now, less than five percent of CPU time is spent in Go’s regular expression library, and the remaining regular expression calls come from the Prometheus client_golang library.

We also ran a test that iterated the matching function alone 100,000 times; it took 40 percent less time than the glob matching type. If glob matching is removed entirely, rather than kept as a fallback for events that simple matching can’t match, the iteration time drops by 60 percent.

Simple Matcher Caveats

There are a few caveats with simple matcher:

  • Rules that need backtracking:


client.*.request.size.*
client.*.*.size.*

The rules above will fail to match an event such as client.aaa.response.size.100.
Correct rules would be:

client.*.response.status.*
client.*.response.size.*

  • Rules whose matching depends on their order of occurrence in statsd.rules (ordering could be preserved with an array, trading away some performance if needed)
  • Labels that use multiple regex expansion variables (support could be added if needed)

Improvements for the Future

After optimizing away the overhead introduced by the Go regex engine, most of the CPU time is now spent in system calls and the Go runtime. On Kong Cloud, we use the StatsD Advanced Plugin to batch the UDP packets for each request, which significantly reduces the number of system calls. This benefits both the sender (the Kong instances) and the receiver (the StatsD exporter).
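
As a rough sketch of the batching idea (not the plugin's actual implementation), the Go snippet below joins several StatsD lines into one newline-delimited UDP datagram, so a single write replaces what would otherwise be one syscall per metric; the address and metric names are made up for illustration.

package main

import (
	"log"
	"net"
)

func main() {
	// Hypothetical StatsD exporter address.
	conn, err := net.Dial("udp", "127.0.0.1:9125")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Newline-delimited StatsD lines sent as one datagram: one sendto()
	// syscall instead of three.
	batch := "kong.request.count:1|c\n" +
		"kong.upstream.latency:42|ms\n" +
		"kong.request.size:1024|c"
	if _, err := conn.Write([]byte(batch)); err != nil {
		log.Fatal(err)
	}
}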

Kong Cloud delivers faster innovation through zero-touch, automatic updates and cloud-speed cadence of new product functionality. It offers the unique flexibility of running on any cloud option, including AWS, Azure and Google Cloud. Kong Cloud is currently in beta and will be available to the general public soon. To learn more about Kong Cloud, sign up for product updates at https://konghq.com/cloud/.

The post Optimizing the Prometheus StatsD Exporter for Cloud Scale appeared first on KongHQ.

Join the First-Ever Kong Community Call


We’re excited to announce that we’re starting the Kong community call on September 25th at 10 a.m. Pacific Time, and we’d love for you to join us!

What is the community call?

The call is a place to highlight community contributions to Kong, give you a chance to ask Kong engineers about new features, and chat about what matters to you! We’ll have a call every month, and we’ll be kicking it off with a Kong Summit recap, a demo of a community-contributed plugin from Optum Health, and a Q&A with Kong Principal Engineer Thibault Charbonnier.

If you’re interested in presenting your work with Kong on a future call, please add your topic and contact info to the future topics section of the agenda Google doc.

How do I join?

The calls will be hosted on Zoom, and you can find the link to join in the agenda doc, along with the times and topics for future meetings. The doc is open for editing, so if there’s anything you’d like to add, feel free.

If you’d like a Google calendar invite for each month’s call, add your email address to the agenda doc.

We’d love your feedback!

The Kong community call is for anyone who uses or wants to learn about Kong, and we’d love your feedback, questions, suggestions, and participation! Get in touch with me about the call or any other Kong community topics at judith@konghq.com.

The post Join the First-Ever Kong Community Call appeared first on KongHQ.

Announcing Kong 1.0


Nearly four years ago, we open-sourced Kong and made it available to the world. Since then, Kong has been downloaded over 45 million times, been deployed in production at some of the largest companies and government agencies worldwide and attracted 110 open source contributors. Today, we’re proud to announce a critical milestone in our journey: Kong is going 1.0!

If you are familiar with Kong, you might be asking, “Is Kong Community Edition (CE) or Kong Enterprise Edition (EE) going 1.0?” The answer is Community Edition, and along with this release we’re changing the name of Kong CE to just plain Kong. Kong EE will now be called Kong Enterprise. The rename reflects the relationship that Kong and Kong Enterprise have always had to each other; Kong is the fully featured and production-grade foundation of Kong Enterprise.

What’s new in 1.0?

Kong 1.0 signals the maturity of the project and also includes major feature updates and minor enhancements created in response to overwhelming community demand.

Support for Service Mesh

Kong 1.0 provides support for Service Mesh deployment patterns enabled by the addition of mutual Transport Layer Security (TLS) between Kong instances, and modifications to the plugin run loop. These changes allow Kong to be deployed alongside each instance of a service, brokering information flow between services and automatically scaling as those services scale.

Migrations

The second enhancement in Kong 1.0 is a new Database Abstraction Object (DAO), which eases any necessary migrations from one database schema to another with minimal to zero downtime when you upgrade to a new version of Kong. The new DAO also allows users to upgrade their Kong cluster all at once, without requiring manual intervention to upgrade each node sequentially.

Other Updates and Improvements

Small but extremely popular improvements include a name property for routes and the addition of HTTPS health checks.

Why 1.0 now?

People often ask us why we hadn’t already labeled Kong 1.0. For most projects, 1.0 means “ready for production.” Kong, however, has been in production at scale with leading companies and organizations across the world for several years. For Kong, 1.0 means that our API is established and backward compatible, and that future improvements will add to (rather than change) current functionality.

 

 

The release of the Plugin Development Kit (PDK), the removal of leftover API entities, and the new 1.0 capabilities described above allow us to guarantee that Kong is not only stable and production ready, but that we are ready for the next phase in our journey.

What’s Next?

We are all very excited about the progress that we have made to date, and are looking forward to the next decade of platform innovation driven by the Kong community, our fellow Kongers, and the ever-growing list of Kong users who put Kong to the test every day. We are incredibly grateful to everyone involved so far and look forward to building the future of Kong with you.

Download Kong 1.0.0rc1 and put it through its paces in preparation for GA. In the meantime, join us for our first-ever community call on September 25th, where you will be able to ask Principal Engineer Thibault Charbonnier your questions about 1.0.

The post Announcing Kong 1.0 appeared first on KongHQ.


Welcome to the New Kong Hub


I’m delighted to introduce you to the Kong Hub, the best place to find and share things that make Kong even better! It is now quick and easy to discover Kong plugins and integrations — and to share your Kong creations with the world.

The Kong Hub serves two main purposes:

  1. If you are a Kong Admin, the Kong Hub is the single best place to discover all the amazing things you can add to Kong to make Kong even more helpful in solving your problems. From plugins that are bundled with Kong, to plugins created and shared by community members, to integrations that make Kong easier and better to use with the systems you already use, you’ll find an ever-growing list of extensions (which is the generic term we use to refer to things in the Kong Hub).
  2. If you created a Kong Plugin, or documented how to integrate Kong with a particular service or package, or extended Kong in some other way, the Kong Hub is the single best place to share your contribution with the global Kong Community. Adding your extension to the Kong Hub is easy, and doing so will ensure it gets discovered by fellow Kong users that can benefit from it. The Kong Hub is viewed more than 20,000 times each month and is typically the second most popular page on our entire website.

To explore the Kong Hub, simply visit https://docs.konghq.com/hub. As of today, you’ll find all the Kong Inc.-published plugins you know and love, plus a handful of plugins and integrations from a variety of contributors, including:

You can also look forward to regular additions to the Kong Hub — we’ll be covering new listings via Twitter, in our email newsletter (subscribe at the bottom of this page), on this blog, in webinars, on Kong Community Calls, etc.

Have you created a Kong integration or plugin that you would like to expose to the widest possible audience? Adding your listing to the Kong Hub is straightforward. The Kong Hub, like the rest of the docs.konghq.com website, is “docs as code”: we maintain documentation using the same tools and processes that we use to maintain our software code. Thus, to add your listing to the Kong Hub, you’ll need to:

  1. Fork the https://github.com/Kong/docs.konghq.com repo
  2. Duplicate the sample index.md file, put it in a correctly-named folder and fill in the blanks
  3. Commit your change, then PR your addition back to https://github.com/Kong/docs.konghq.com
  4. Your contribution will get reviewed by Kong staff — we’ll work with you to get it finalized, then listed on the live https://docs.konghq.com/hub
  5. Going forward, as new versions of your extension are released and/or as new major versions of Kong and Kong Enterprise are released, you are encouraged to update your Kong Hub listing — or the community might help out and update it!

The Kong Hub contribution process is documented in detail, along with an invitation to schedule a quick call with me — we can discuss your plans for a new Kong integration, I can answer questions about listing your extension, etc. You are also invited to join the Kong Hub discussion in Kong Nation.

The post Welcome to the New Kong Hub appeared first on KongHQ.

Kong Summit 2018 Highlights


On September 18-19, 2018, we had our first-ever conference, Kong Summit! Several hundred attendees joined us at The Pearl in San Francisco to hear some amazing product announcements, learn about industry trends like microservices and serverless, and hear how fellow community members and customers are integrating Kong with other cloud-native projects including Kubernetes, Prometheus, and Terraform.

We have recorded all of the talks, and our team is working hard on making them available very soon. Sign up for updates in the footer to be notified when they become available. In the meantime, enjoy the recap video below!

Also, we’ve summarized the major announcements from the Summit, and ways to keep in touch with the community.

 

The end of API Management

“API management is dead!” Augusto Marietti proclaimed in his opening keynote. Microservices are proliferating at an ever-increasing pace, and their APIs can’t be managed individually anymore; there are just too many of them. Adding to this complexity, the rise of streaming analytics means that most useful data is in flight, being passed from one service to another to deliver real-time results. Introducing latency into streaming systems has dire consequences, since data can get stale within seconds of its collection.

These two factors, combined with the fact that companies usually run microservice-heavy, real-time applications on mixed infrastructure (both in the public cloud and on-premises) mean that a new type of tool is required to manage the flow of information within applications. In the opening keynotes, Kong Inc. announced its vision to provide this new tool, a service control platform, to broker the flow of information that is constantly in flight between ever-proliferating microservices on hybrid infrastructure.

Product Announcements

The major product announcements at Kong Summit included:

  • Kong 1.0 – This milestone marks the maturity of our open source offering and a solidification of our API. We promise that changes in the 1.x series will be additive rather than modifying existing functionality.
  • Service Mesh – This deployment pattern is enabled in 1.0 by the addition of mutual Transport Layer Security (TLS) and modifications to the plugin run loop.
  • Kong Cloud – Kong Enterprise will soon be available as a hosted service for teams that want to focus on business value rather than operations.

Summit also included a keystone announcement for Kong’s artificial intelligence (AI) capabilities. Kong CEO, Augusto Marietti, announced that Kong’s service control platform will leverage AI to better understand service connections and communications. Kong VP of Engineering, Geoff Townsend, treated the Kong Summit audience to demos of Kong’s upcoming AI-fueled products. Check out the first products around Machine Learning (ML) and Security and sign up for a private beta.

Use Case Examples

During Kong Summit, we heard some great stories from Kong users and customers. McAfee told us about their use of Kong and contributions to the open source community. Cargill told us how their team uses Kong to route traffic in Kubernetes. Zillow open-sourced the Terraform module that they use to deploy Kong into AWS. And Yahoo! Japan outlined how they run Kong to provide high availability in the face of natural disasters, including earthquakes.

We’d like to extend a huge thank you to all the users who shared their use cases during the summit! If you are doing something cool with Kong in production we’d love to hear from you too! Please contact the Kong marketing team; we’d love to signal-boost your story!

Sign Up for Talk Recordings

All the talks at Kong Summit were recorded, and we’ll be sharing them with everyone once the recordings are processed! If you would like to be alerted when new recordings come out, you can sign up at the bottom of this page to get them sent straight to your inbox.

Stay in touch!

You can stay in touch with the Kong community by joining our monthly community call, or attending our upcoming webinar on our company’s vision and what’s enabled by Kong 1.0.

The post Kong Summit 2018 Highlights appeared first on KongHQ.

Community Call Starts Out Kong Strong


Big thanks to the 37 people who joined us on the first-ever Kong Community Call last week! The inspiring demo, interesting discussion and enthusiastic participation have me looking forward to our next call on October 16 and the other monthly calls moving forward! As an evangelist for Kong, I’m really happy to see such great participation from everyone who came to the call looking to learn or to share their knowledge with the rest of the community.

Ross Sbriscia and Jeremy Justus gave a wonderful demo of the Spec Expose plugin that they built for Optum. They also had some insightful points of view about the evolving role of open source software in enterprises and the experience of working with open source vs. proprietary software.

In addition to our planned conversation about the Kong 1.0 changes, Thibault Charbonnier gave a great explainer of the debugging resources available for Kong plugins, both in Kong itself and in the Plugin Development Kit (PDK). The group also discussed the product announcements and sessions at Kong Summit.

All the community calls are recorded, so if you weren’t able to join us for the first one, you can catch up with the recording below and ask any follow-up questions or continue the discussion on Kong Nation!

 

Our next call is scheduled for October 16 at 10 a.m. PT, and I hope you’ll join us! In honor of Hacktoberfest, Cooper Marcus will give an introduction to contributing to the Kong docs, and we will have another plugin demo — this time from a partner.

If you’d like to attend the meeting, get the latest schedule details, or propose a topic for a future call, visit our open Google doc agenda. To get a Google calendar invite for future calls, please add your email address to the very bottom section of the doc, and I will add you!

Hope to see you soon!

The post Community Call Starts Out Kong Strong appeared first on KongHQ.

Try Kong on Kubernetes with Google Cloud Platform


The best way to learn a new technology is often to try it. Even if you prefer reading docs, hands-on experimentation is an ideal accompaniment to written instructions. Today I’m happy to announce the fastest and easiest way to try Kong on Kubernetes — the new Kong Kubernetes App on the Google Cloud Platform (GCP) Marketplace.

Try Kong on GKE

Google Cloud recently announced the ability to quickly deploy containerized applications to Google Kubernetes Engine (GKE) from the GCP Marketplace. Those same apps are just as easy to deploy to Kubernetes clusters running anywhere, not just on Google Cloud Platform.

A few developments make GCP + Kong the fastest way to try Kong on Kubernetes:

  1. GCP has a free tier with credits for first-time users
  2. Once you have a GCP account, deploying a Kubernetes cluster takes just one click (or one command, for those who prefer the CLI)
  3. Once you have a Kubernetes cluster, deploying Kong to that cluster takes only a few more clicks

The following video is a walk-through of the process starting with step #2 and ending with tearing down the Kubernetes cluster to preserve your Google Cloud Platform credits.

Other ways to try Kong on Kubernetes

Kong’s install page for Kubernetes presents many other options for deploying Kong on Kubernetes. If you prefer a Helm chart, manifest files, Minikube, an ingress controller, or even Kong Enterprise, we’ve got you covered!

Find help and contribute

If you run into trouble or have improvements to suggest, we welcome your feedback on Kong Nation! You can also contribute to the project via Pull Requests on our documentation repo or the new repo for Kong on Kubernetes via GCP Marketplace.

The post Try Kong on Kubernetes with Google Cloud Platform appeared first on KongHQ.

Designing a Metrics Pipeline for SaaS at Scale: Kong Cloud Case Study


In this blog, the Kong Cloud Team shares their experience building the metrics infrastructure that supports the Kong Enterprise service control platform as a hosted managed service in the cloud.

Kong is a popular open source API platform, which has been widely adopted around the world. Here at Kong Inc., we believe in building performant products. Kong itself is proof of that, and Kong Cloud is no exception. We’ve been making progress toward developing a cloud provider-agnostic, scalable, hosted and managed service control platform with all the latest and greatest Kong Enterprise features. It delivers the value you would expect from a Kong-as-a-Service solution, letting you focus on building business value instead of maintaining infrastructure.

Monitoring and logging for Kong Cloud’s proof of concept

Kong Cloud is a software-as-a-service (SaaS) offering designed to handle web-scale traffic using modern cloud software. Running a SaaS is not only about running your software itself but also running all the necessary tooling to provide operational observability and efficiency so that you can keep your software up and running well for your customers.


Figure 1: The first iteration of our metrics and logging infrastructure.

We use the rate, error and duration (RED) method to ensure that our cloud services are meeting the service level objectives (SLOs) we’ve set. All traffic entering our network goes through an edge layer – our first layer of defense – where we do TCP termination, collect metrics, perform logging and tracing, and implement firewalling. Telemetry data (type, latency, status codes, errors encountered) for each incoming or outgoing request is recorded, indexed and analyzed using a time-series database and distributed tracing.

An ELK stack and Prometheus make up the logging and monitoring infrastructure, respectively. Filebeat forwards all the logs from our edge layer instance. In the first iteration of our metrics pipeline, Logstash grokked most of our Service Level Indicator (SLI) metrics and fed them into Prometheus via StatsD events. Prometheus then alerted our Site Reliability Engineers (SREs) to any anomalous behavior via Alertmanager.

We noticed a problem with our initial design

This seemed like a good first stepping stone, but our stress tests said otherwise. When we compared metrics from different sources, we noticed that they didn’t always agree.


Figure 2: Inconsistent results.

Figure 2 shows the inconsistency between the metrics we collected from different sources during a stress test that we initiated. The top half of the figure shows the requests per second (RPS) at the edge layer, while the bottom half shows the number of TCP connections at the edge layer, which were collected using the Prometheus Node Exporter.

We initiated a burst of requests beginning around 17:55 that ended at 18:15, but the metrics pipeline shows RPS climbing only from 18:00 and not tapering off until after 19:00. This was a stress test during which our metrics dashboard was plainly lying to us. We could say this with certainty because the source of traffic was a wrk process under our control. The metrics we observed on the client side did not match the metrics we stored.

We started to look into our bottlenecks and quickly realized this was a bigger problem.

The lift-off approach

The root cause of our problem was our design for getting metrics into Prometheus. To gather metrics, we tweaked the Logstash nodes in the ELK pipeline to mutate the numeric values present within each log document (latency values, HTTP status codes, etc.) into integers and render StatsD events. The StatsD events were then sent to a Prometheus exporter, which took them in, aggregated them and exposed them in the Prometheus Exposition Format for Prometheus to consume and alert on.

This dependency of the metrics pipeline on the logging pipeline meant that metrics would be incorrect if the ELK stack had any failures or bottlenecks. Our simple, naive design had helped us get off the ground as quickly as possible while we tested our hypotheses on the business and technical side, but it was not suited for production traffic.

Logs are not the same as metrics

Astute readers might have already noticed the problem with our first approach — logs != metrics. Parsing logs can be expensive and slow. When metrics are derived from logs, they are only available once the logs have been processed. This defeats the purpose of these metrics, since we need them to give us information about the real-time SLIs for our service. For our use-case starting out, it was acceptable for logs to be processed at a slower rate during surges. We never expected them to be real-time. This initial design — re-using a logging pipeline — reduced operational and development cost while we started to set up our infrastructure. It worked well to bootstrap a cluster of services for Kong Cloud but quickly reached its limits (hence, this blog post).

All shortcuts failed

Although we were aware that our metrics pipeline had to be rethought, we spent a little time to see how much we could get out of the initial design using operational and configuration changes.

Improve StatsD since it was the bottleneck

The first pass at stress testing our metrics pipeline highlighted what appeared to be a CPU bottleneck within the StatsD exporter. We spent some time studying and optimizing the exporter but still found that the Key Performance Indicator (KPI) data reported by Prometheus did not align with actual traffic patterns. This was nevertheless a huge improvement for us, since we still rely on some metrics associated with certain kinds of logs.

Optimize logging Infrastructure

Batching in Filebeat

After making sure that we were not saturating network links, we tuned our Filebeat configuration to ensure that we were batching logs and shipping them efficiently over to the Logstash nodes.

This further improved the indexing rate, but there was not much more that we could optimize here. The side effect was that the metrics emitted by Logstash now arrived in spikes that periodically filled up the UDP buffer on the StatsD exporter side.

Improve ES indexing rate

We probably don’t need to get into a discussion of how Elasticsearch is a beast just to run, maintain and scale. We tuned up our instances to make sure we had a good balance of CPU and IO and enough firepower in the cluster to index at a moderate rate. We also tuned up Logstash to do aggressive batching and over-provisioned it, so that it wasn’t a bottleneck.

It should be noted that all these improvements more than doubled the rate of metrics being reported, but that was nowhere near what we expected our traffic rate to be. We had to improve the metrics pipeline so it could record metrics at anywhere from several thousand RPS to several hundred thousand RPS (with the possibility of exceeding 1M RPS).

Since we knew we weren’t pursuing the right direction, and that optimizing wouldn’t solve the problem at hand, we needed a new homegrown design.

Our design goals

We knew that we needed to rethink how we collected our metrics at the edge. In addition to ensuring correctness and real-time delivery of our metrics, we wanted to solve a few other problems to make our infrastructure easier to scale and make it more resilient. The following are some key design goals we set out to achieve to solve our problems. These are listed in the order of their significance:

Least-possible impact on latency

Kong is designed to serve requests at high throughput and extremely low latencies. Kong adds less than a millisecond to the latency for most requests. When your core product doesn’t add much latency, there is not a lot of scope for any overhead in other supporting services. All our metrics had to be collected with the least amount of impact on latency possible.

Scalability

In our previous solution, most of the pipeline had to be scaled up linearly as the amount of traffic increased. This was a major operational and cost burden. The new design’s resource needs should not grow linearly with the traffic rate; it should scale horizontally along with our edge tier. One could scale Elasticsearch to a million-per-second indexing rate, but that’s a cost we are not ready to pay, both in infrastructure and in keeping people around whose sole job is running Elasticsearch.

Minimize blast radius

Separation of our logging and metrics infrastructure and pipelines was another key goal for us. Specifically, a degradation or failure within the logging pipeline should have no impact on the metrics pipeline, and vice versa. With a microservices architecture, it is easy to end up in dependency hell. We actively try to keep the blast radius of every component as small as possible.

Touchdown!

While we were balancing all of the above concerns, we came up with a few ideas about how we could architect the new solution. We spent time thinking about running parser-only Logstash processes alongside NGINX and about developing a plugin for Logstash to export metrics more efficiently, but these all seemed like operational/configuration hacks as well, and they didn’t address the root problem. We had to record the metrics in the application itself (in this case, NGINX) for a comprehensive picture and a performant solution.

The solution that finally worked came out of Kong itself. We realized that we could use OpenResty (Kong is built on top of OpenResty) to solve all of these problems. We could write Lua code to simplify the data flow by directly sending metrics to Prometheus and avoiding the expensive and slow log parsing and StatsD events.


Figure 3: Our rearchitected metrics pipeline.

To implement our solution, we replaced NGINX with OpenResty on the edge tier and used the lua-nginx-module to run Lua code that captures metrics and records telemetry data during every request’s log phase. Our code then pushes the metrics to a local aggregator process (written in Go), which in turn exposes them in the Prometheus Exposition Format for consumption by Prometheus. This solution reduced the number of components we needed to maintain and is blazingly fast thanks to NGINX and LuaJIT. We will be doing a deep dive into the design and code of our solution in the next blog post, so stay tuned for that!
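
To give a feel for the shape of that aggregator, here is a minimal sketch built on the Prometheus Go client; the push endpoint, wire format, metric name and labels are all illustrative assumptions, not our actual implementation.

package main

import (
	"log"
	"net/http"
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical metric; the real schema and label set differ.
var edgeLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "edge_request_latency_seconds",
		Help: "Request latency observed at the edge tier.",
	},
	[]string{"status"},
)

func main() {
	prometheus.MustRegister(edgeLatency)

	// Hypothetical push endpoint the Lua log-phase code could call, e.g.
	// /observe?status=200&latency=0.012; the real transport differs.
	http.HandleFunc("/observe", func(w http.ResponseWriter, r *http.Request) {
		latency, err := strconv.ParseFloat(r.URL.Query().Get("latency"), 64)
		if err != nil {
			http.Error(w, "bad latency", http.StatusBadRequest)
			return
		}
		edgeLatency.WithLabelValues(r.URL.Query().Get("status")).Observe(latency)
		w.WriteHeader(http.StatusAccepted)
	})

	// Prometheus scrapes this endpoint in the exposition format.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9145", nil))
}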

The above solution not only met all our design goals but also opened up a gateway to much better monitoring of our edge tier itself.

Show me the data!

Figure 4 shows the same graph as Figure 2, only now the metrics reported by our pipeline are the same as what we see on the client side.


Figure 4: Consistent results.

Conclusion

Our solution has been running in production for over two months. Our logging infrastructure frequently lags behind in indexing but we are extremely satisfied with the responsiveness of our metrics pipeline.

The code we designed and optimized is not currently generic enough to use for other applications; it caters to our very specific needs and is tightly coupled with how we handle multi-tenancy in Kong Cloud. But our approach itself can be ported for other use cases running at scale.

Instrumenting applications with metrics infrastructure is an investment that pays for itself pretty quickly with the observability that it provides. It should be done early on rather than as an afterthought.

This post is the first in a three-part blog series around Kong Cloud’s metrics pipeline design and implementation. In part two, we will dig into the meat of our implementation and describe how we aggregated metrics with a combination of push and pull approaches to get the best of both worlds.

Till then, may your pager be quiet!

This post was written by the Kong Cloud Team: Harry (@hbagdi), Robert (@p0pr0ck5), Wangchong (@fffonion), and Guanlan (@guanlan)

The post Designing a Metrics Pipeline for SaaS at Scale: Kong Cloud Case Study appeared first on KongHQ.
