6 Strategy Elements for Building Cloud Native Applications


The cloud native paradigm for application development has come to consist of microservices architecture, containerized services, orchestration, and distributed management. Many companies are already on this journey, with varying degrees of success. To be successful in developing cloud native applications, it’s important to craft and implement the right strategy. Let’s examine a number of important elements that must be part of a viable cloud native development strategy. For a deeper dive into cloud native, check out our eBook Architecting the Future: Cloud Native Applications.

1. Prepare Well to Transition to Cloud Native

The first step in a successful transformation is to make a plan. Many organizations don’t even get moving in the right direction because they begin with the technology. While new technology can be exciting, it can also be daunting. Otherwise highly beneficial technology can be misused to the point of frustration and abandonment.

At the outset, it’s critical to involve your leadership, partners, and customers. Present your findings and high-level plans. Assemble the right team and work together to divide your cloud native journey into phases. Then, break these phases into development projects, sprints, and actions. Set clear expectations and frequently collect feedback.

Resist the temptation to pursue the technology before you align your business mission, vision, and people with your cloud native aspirations.

2. Transition Away from Silos to DevOps

Despite the prevalence of agile methodology, application development is still commonly organized into these silos:

  • Software development
  • Quality assurance and testing
  • Database administration
  • IT operations
  • Project management
  • System administration
  • Release management

This arrangement enables specialization for managing staff who perform the work in each area. Typically, these silos have different management structures, tools, methods of communication, vocabularies, and incentives. These differences lead to disparate views of the mission and implementation of the application development effort.

DevOps is both a methodology and an organizational structure. It aims to break silos open and build a common vocabulary, shared toolsets, and broader channels of communication. The goal is to cultivate a culture that focuses intensely on frequent releases of high-quality deliverables. DevOps replaces heavy procedures and unnecessary bureaucracy with autonomy and accountability.

3. Move from Waterscrumfall to Continuous Delivery

Today, many agile teams find themselves immersed in what Dave West calls water-scrum-fall. Yes, it’s good to embrace Agile principles. Too often, however, the wider organization does not. On many agile teams, the result of each iteration is not actually a production-grade deliverable, even though working software at the end of each iteration is the original intent of the Agile Manifesto.

What is more common is that the new code is merely batched together with other changes downstream, which closely resembles the conventional waterfall model. This reversion to conventional development diminishes two key benefits of agile delivery. First, customers go several weeks without seeing any addition to the value of the application under development. Second, the development team goes the same period of time without receiving any truly valuable feedback.

To develop cloud native apps and realize the benefits of cloud native architectures, it’s necessary to make a complete shift to continuous delivery (CD). In CD, application changes are deployed automatically—several times a day. Some teams are having great success, but only because they have built product development pipelines that automate code integration and testing. A mature, productive CD pipeline leaves the team with only one decision to make at the end of the day: does it make good business sense to deploy the application with all of the new changes?

Let’s now turn to the implementation work involved in making the move to cloud native application development.

4. Decompose Your Monolith

Conventional multi-tier monolithic applications rarely function properly when they are moved into the cloud, because such a move is usually made with several major, unsupportable assumptions about the deployment environment. Another inhibitor is that a monolith deployment is closely bound to a static, enduring infrastructure. You’re probably thinking—quite rightly—that this is largely incompatible with cloud-computing expectations of an ephemeral and elastic infrastructure. Since cloud infrastructure doesn’t provide good support for monoliths, it’s necessary to make a plan for breaking a monolithic application into components that can live happily in the cloud.

5. Design a Collection of Services

In essence, a cloud native architecture is a service-based architecture. Optimally, cloud native applications should be deployed as a collection of cloud services or APIs. However, while the concepts are readily understood, many developers still have a strong tendency to create tightly coupled applications that bind directly to the user interface. To leverage cloud-computing assets and benefits effectively, a cloud native application should expose supporting functions as services that are independently accessible.

An application architecture built for the cloud must be able to interact with complex, disparate, widely distributed systems. These systems can support multiple loosely coupled applications that employ many services while remaining decoupled from the data. Developers can build up from the data and use it in communicating with services. These services can be combined into composite services—and composite applications—that remain flexible and scalable.

6. Decouple and Decompose the Data

It’s not enough to simply decompose monolithic applications into microservices. In addition, it’s also essential to decouple the data model. If a development team is given the freedom to be “autonomous”, yet must still contend with a single database, the monolithic barrier to innovation remains unmoved.

If the data has been tightly bound to an application, it can’t find a good home in the cloud. Think about it: it’s necessary to decouple the data for the same reasons we know it’s best to decompose application functions into services. The effort to decouple the data will be richly rewarded with the ability to store and process the data on any cloud instance.

Conclusion

Cloud native application development does require that you invest in a new way of thinking and some new development paradigms. However, many conventional concepts remain important, including good design and automated testing. A key takeaway is that a service architecture should be given priority, even if it initially results in a lengthier app development lifecycle. Even if part of the cost is a temporary increase in budget, the long-term gains in efficiency may make it the smartest investment your team will ever pursue.

The post 6 Strategy Elements for Building Cloud Native Applications appeared first on KongHQ.


Introducing Kong’s New GTM Partner Program!


Navigating the transition from monolith to microservices can be an ambitious undertaking for any organization. Today, we’re launching our new Kong GTM Partner Program to create a global ecosystem of experts to help organizations successfully transition to microservices, service mesh and other modern architectures. We’re rolling out the program with 20 inaugural members worldwide, including value-added resellers (VARs) and systems integrators (SIs), that have been carefully selected and qualified.

Global 5000 companies are racing to transform themselves into digital enterprises as they shift away from monolithic legacy systems towards decentralized, hybrid and multi-cloud architectures. This new partner program helps VARs and SIs stay at the forefront of the industry and offer their customers innovative solutions for creating and managing scalable, high-performance microservices-driven applications. We provide partners with all the tools, resources and in-depth training they need to effectively market and implement Kong Enterprise in customer accounts.

How the GTM Partner Program Works

If you’re a VAR or SI interested in joining the Kong GTM Partner Program, you can apply online at https://konghq.com/gtm-partners/. After completing the qualification phase, the partner candidate is eligible to enter the program as a Silver partner.

To ensure ongoing success in the program, partners are required to participate in a range of training activities, including the Kong Partner Kollective, an annual, interactive, hands-on session where partners:

  • Hear the latest company updates
  • Participate in hands-on technical training sessions
  • Interact with the Kong team in sales training sessions
  • Learn Kong’s sales strategy in collaborative, break-out groups
  • Gain knowledge around Kong’s product roadmap and vision
  • Provide input and observations from their territory
  • Network with other GTM partners and key Kong personnel

As Silver partners gain expertise in Kong, they may be eligible to become a Gold and eventually a Diamond level partner, which offers first-point-of-contact (Level 1) support to customers, including leading implementation, configuration, trouble-shooting and customization. Gold and Diamond partners will have tighter alignment with Kong and be eligible to participate in special invite-only programs such as the Partner Advisory Council and future certification programs.

We’re excited to roll out this program to VARs and SIs across the globe. Join us as a partner!

The post Introducing Kong’s New GTM Partner Program! appeared first on KongHQ.

Join our New Kong Champions Program!


Today, we’re excited to launch the Kong Champions Program, a new program that recognizes Kong’s “super” users and contributors – community members who go above and beyond in supporting Kong’s open source product – and gives them unique opportunities to make an impact on the Kong community. Kong was founded on open source DNA, and we remain committed to building and fostering a vibrant community that together achieves great things. With this new Kong Champions Program, we want to make it easier for our champions to give back, engage directly with their peers and help shape the future of their community.

Kong Champions will have the opportunity to share their Kong passion and knowledge with the larger open source community through forum discussions, blogging, talks at local events, monthly Kong Community Calls and other activities. Kong Champions are the core foundation of the community and share the following attributes: they actively engage in the community across various mediums, are vocal about their feedback for the direction of Kong, and lend a helping hand to those who need help contributing to Kong.

Are you interested in becoming a Kong Champion? Sign up to be considered for the program here!

The post Join our New Kong Champions Program! appeared first on KongHQ.

Announcing Kong’s Integration with Vault!


Today we’re excited to show how Kong Enterprise customers can utilize our new plugin for HashiCorp Vault for authentication and secrets management. Like the Terraform integration released last year, this new integration with Vault represents another step towards allowing Kong Enterprise customers to leverage HashiCorp’s suite of cloud infrastructure automation tools.

The Vault plugin will allow KE customers to add authentication to a Service or Route with an access token and secret token, with credential tokens being stored securely via Vault. Credential lifecycles can be managed through the Kong Admin API, or independently via Vault. Read below for a simple 5-step guide on how to get started using Vault with Kong Enterprise.

Getting Started with Kong and Vault

1. Create a Vault Object

To start, we’ll need to create a Vault to store our tokens. A Vault object represents the connection between Kong and a Vault server. It defines the connection and authentication information used to communicate with the Vault API. This allows different instances of the vault-auth plugin to communicate with different Vault servers, providing a flexible deployment and consumption model.

Vault objects can be created via the following HTTP request:

$ curl -X POST http://kong:8001/vaults \
  --data name=kong-auth \
  --data mount=kong-auth \
  --data protocol=http \
  --data host=127.0.0.1 \
  --data port=8200 \
  --data token=s.m3w9gdV0uMDYFpMgEWSB2mtM
HTTP/1.1 201 Created

{
    "created_at": 1550538643,
    "host": "127.0.0.1",
    "id": "d3da058d-0acb-49c2-b7fe-72b3e9fd4b0a",
    "mount": "kong-auth",
    "name": "kong-auth",
    "port": 8200,
    "protocol": "http",
    "token": "s.m3w9gdV0uMDYFpMgEWSB2mtM",
    "updated_at": 1550538643
}

This assumes a Vault server is accessible via 127.0.0.1:8200, and that a version 1 KV secrets engine has been enabled at kong-auth. The provided Vault token should have at least ‘read’ and ‘list’ permissions on the given Vault mount path, as well as ‘write’ and ‘delete’ permissions if you wish to manage credentials via the Kong Admin API. Vault KV secrets engine documentation is available via the Vault documentation.
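
If you are setting up the Vault side from scratch, the preparation might look like the following sketch. The policy and token shown here are illustrative only; adjust the paths and capabilities to your environment.

# Enable a version 1 KV secrets engine at the kong-auth path
$ vault secrets enable -path=kong-auth -version=1 kv

# Grant the permissions described above with a policy
$ vault policy write kong-auth-policy - <<EOF
path "kong-auth/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOF

# Issue a token tied to that policy and use it as the token value above
$ vault token create -policy=kong-auth-policy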

2. Create a Consumer

To actually use our Vault plugin, we’ll next need to create a Consumer to associate with one or more credentials. The Consumer represents a developer using the upstream service. The Vault object we created in the previous step will represent the connection Kong will use to communicate with the Vault server where access and secret tokens will be stored.

First, we’ll need to associate a credential with an existing Consumer object. To create a Consumer, execute the following request:

$ curl -X POST http://kong:8001/consumers/ \
    --data "username=<USERNAME>" \
    --data "custom_id=<CUSTOM_ID>"
HTTP/1.1 201 Created

{
    "username":"<USERNAME>",
    "custom_id": "<CUSTOM_ID>",
    "created_at": 1472604384000,
    "id": "7f853474-7b70-439d-ad59-2481a0a9a904"
}
  • username (semi-optional)
    • The username of the Consumer. Either this field or custom_id must be specified.
  • custom_id (semi-optional)
    • A custom identifier used to map the Consumer to another database. Either this field or username must be specified.

If you are also using the ACL plugin and whitelists with this service, you must add the new consumer to a whitelisted group. See ACL: Associating Consumers for details. A Consumer can have many credentials.
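
For example, associating the new consumer with a whitelisted group via the ACL plugin might look like the following sketch; the group name is illustrative.

$ curl -X POST http://kong:8001/consumers/{consumer}/acls \
    --data "group=vault-users"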

 

3. Create an Access/Secret Token Pair

Next, we’ll need to create a pair of tokens that function as our vault-auth credentials. These tokens are defined as: an access token that identifies the owner of the credential, and a secret token that is used to authenticate ownership of the access token.

Token pairs can be managed either via the Kong Admin API or independently via direct access with Vault. Token pairs must be associated with an existing Kong Consumer. Creating a token pair with the Kong Admin API can be done via the following request:

$ curl -X POST http://kong:8001/vaults/{vault}/credentials/{consumer}
HTTP/1.1 201 Created

{
    "data": {
        "access_token": "v3cOV1jWglS0PFOrTcdr85bs1GP0e2yM",
        "consumer": {
            "id": "64063284-e3b5-48e7-9bca-802251c32138"
        },
        "created_at": 1550538920,
        "secret_token": "11XYyybbu3Ty0Qt4ImIshPGQ0WsvjLzl",
        "ttl": null
    }
}

When the access_token or secret_token values are not provided, token values will be automatically generated via a cryptographically secure pseudorandom number generator (CSPRNG).
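
If you prefer to supply your own values instead, the same endpoint should accept them as form parameters. This is a sketch, assuming access_token and secret_token can be passed in the request body:

$ curl -X POST http://kong:8001/vaults/{vault}/credentials/{consumer} \
    --data "access_token=<your access token>" \
    --data "secret_token=<your secret token>"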

4. Integrating Vault objects with Vault-Auth plugins

To create a seamless lifecycle relationship between Vault instances and plugins with which they’re associated, Vault objects are treated as foreign references in plugin configs. To integrate them, you’ll need to define an association with a Vault object, which can be accomplished using the following HTTP request during plugin creation:

$ curl -X POST http://kong:8001/plugins \
  --data name=vault-auth \
  --data config.vault.id=<uuid>
HTTP/1.1 201 Created

{
  "created_at": 1550539002,
  "config": {
    "tokens_in_body": false,
    "secret_token_name": "secret_token",
    "run_on_preflight": true,
    "vault": {
      "id": "d3da058d-0acb-49c2-b7fe-72b3e9fd4b0a"
    },
    "anonymous": null,
    "hide_credentials": false,
    "access_token_name": "access_token"
  },
  "id": "b4d0cbb7-bff2-4599-ba19-67c705c15b9a",
  "service": null,
  "enabled": true,
  "run_on": "first",
  "consumer": null,
  "route": null,
  "name": "vault-auth"
}

Where <uuid> is the id of an existing Vault object.
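
The example above enables vault-auth globally. To protect only a particular Service or Route, as described earlier, the plugin can instead be created against that entity. Here is a sketch assuming an existing Service named example-service:

$ curl -X POST http://kong:8001/services/example-service/plugins \
  --data name=vault-auth \
  --data config.vault.id=<uuid>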

5. Using Vault credentials

Now that we’re all set up, you can start using your Vault credentials by making a request with the access_token and secret_token as query string parameters:

$ curl "http://kong:8000/{proxy path}?access_token=<access token>&secret_token=<secret token>"

Or in a header:

$ curl http://kong:8000/{proxy path} \
    -H 'access_token: <access_token>' \
    -H 'secret_token: <secret_token>'

Getting the Most Out of Using Kong with Vault

Now that you’re up and running with Kong and Vault, you’ll want to manage them effectively. Below, we’ve detailed some key considerations for optimizing your experience with Kong and Vault.

Deleting an Access/Secret Token Pair

When you need to restrict or remove access, you can delete existing Vault credentials from the Vault server via the following API:

$ curl -X DELETE http://kong:8001/vaults/{vault}/credentials/token/{access token}

HTTP/1.1 204 No Content

Token TTL

When reading a token from Vault, Kong will search the responding KV value for the presence of a ttl field. When present, Kong will respect the advisory value of the ttl field and cache the credential only for as long as the ttl field defines. This allows tokens created directly in Vault, outside of the Kong Admin API, to be periodically refreshed by Kong.

Extra-Kong Token Pairs

Kong can read access/secret token pairs that have been created directly in Vault, outside of the Kong Admin API. Currently, vault-auth supports creating and reading credentials based on the Vault v1 KV engine. Created Vault KV secret values must contain the following fields:

{
  access_token: <string>
  secret_token: <string>
  created_at: <integer>
  updated_at: <integer>
  ttl: <integer> (optional)
  consumer: {
    id: <uuid>
  }
}

Additional fields within the secret are ignored. The key must be the access_token value; this is the identifier by which Kong queries the Vault API to fetch the credential data. See the Vault documentation for further information on the KV v1 secrets engine.

vault-auth token pairs can be created with the Vault HTTP API or the vault write command:

$ vault write kong-auth/foo - <<EOF
{
  "access_token": "foo",
  "secret_token": "supersecretvalue",
  "consumer": {
    "id": "ce67c25e-2168-4a09-81e5-e06187a2384f"
  },
  "ttl": 86400
}
EOF
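
You can verify the stored pair directly from Vault, either with the vault CLI or the KV v1 HTTP API; this sketch assumes the server and mount from step 1:

$ vault read kong-auth/foo

$ curl --header "X-Vault-Token: <vault token>" \
    http://127.0.0.1:8200/v1/kong-auth/foo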

 

We’re incredibly excited about the new capabilities this integration with Vault will provide Kong Enterprise customers. Get started with a free trial of Kong Enterprise today, and be sure to reach out on Kong Nation with any questions or feedback.

The post Announcing Kong’s Integration with Vault! appeared first on KongHQ.

Kong 0.35 Released – Featuring Integration with HashiCorp Vault!


Today we’re thrilled to announce general availability for Kong Enterprise 0.35. This release of Kong Enterprise contains numerous new features, including a new integration with HashiCorp Vault, huge usability improvements in Kong Manager and the Kong Dev Portal, and a host of updates to some of our most popular Kong Enterprise tools.

Read on to take a deeper look at a few of the most notable updates included in this release. We’ll detail how these features can help you get started with your Kong Enterprise journey or take your existing deployment to the next level. Be sure to sign up for the webinar reviewing all of Kong Enterprise’s features, and check out the changelog for the full release details.

What’s New?

New Vault Plugin!

We created a plugin enabling Kong Enterprise users to leverage HashiCorp Vault for authentication and secrets management. The Vault plugin will allow KE customers to add authentication to a Service or Route with an access token and secret token, with credential tokens being stored securely via Vault. Credential lifecycles can be managed through the Kong Admin API, or independently via Vault. For information on how to get started, check out our step-by-step guide!

Kong Manager

Our engineering teams have made dramatic improvements to the Kong Manager UI to provide a more dynamic management experience. With 0.35, Kong Manager provides streamlined operations and simpler navigation to improve your Kong Enterprise experience, including:

  • Easily enable, disable and view plugins in the context of a route and a service
  • Predictive type-ahead service name/ID search on the routes form
  • Contextual help and error messages on entity forms
  • Breadcrumbs to more easily tell where you are in the Kong Manager application and navigate to other pages
  • Global navigation links to docs and the support portal
  • Improved Info tab for cluster & config info

Developer Portal

In 0.35 we’ve made key performance improvements to the Kong Dev Portal and enabled more streamlined customization to build on the tighter Kong Manager integration we debuted in 0.34.

Key updates include:

  • Live editor for improved customization of portal within Kong Manager
  • Server-side rendering and major performance improvement
  • Easily add custom fields on dev sign up form 

Major Plugin Updates

Kong Session Plugin

The new Kong Session Plugin provides session support for Kong Manager and the Dev Portal. Sessions can be stored via cookie or database. See the accompanying documentation for how to configure the plugin for use with Kong Manager.
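
As a rough sketch, enabling the plugin with database-backed session storage via the Admin API might look like this; the storage value is illustrative, cookie storage is the alternative, and the documentation above covers the full configuration:

$ curl -X POST http://kong:8001/plugins \
  --data name=session \
  --data config.storage=kong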

StatsD Advanced

With the updated StatsD plugin, Kong Enterprise users can now specify status code ranges for the responses they want logged and sent to the StatsD server. If no status codes are provided, all responses will be logged.

Other Important Additions

Kong Manager

  • Easily scan/copy IDs in the service table.
  • In-context doc links now provided globally.

Dev Portal

  • User information is no longer stored in local storage. A user exchanges credentials for a session.
  • Added “methods” to links in the sidebar on documentation in the default portal theme.
  • Ability to configure CORS header explicitly.

 Core

  • New RBAC user tokens are not stored in plaintext. If upgrading to this version, any existing tokens will remain in plaintext until either a) the token is used in an Admin API call or b) the rbac_user record is PATCHed.
  • Detailed debug tracing can be enabled which outputs information about various portions of the request lifecycle, such as DB or DNS queries, plugin execution, core handler timing, etc.

 Plugins

  • We have made a variety of updates and improvements to a host of our Enterprise plugins, including:
    • LDAP Auth Advanced
    • Response Transformer Advanced
    • Request Validator
    • Canary
    • OAuth2 Introspection
    • Forward Proxy

 

Be sure to try all the new features in Kong Enterprise 0.35, and let us know what you think on Kong Nation! For a full list of updates, fixes, and optimizations, check out the 0.35 changelog.

The post Kong 0.35 Released – Featuring Integration with HashiCorp Vault! appeared first on KongHQ.

Kong 1.2 Released! Major Performance Improvements and Newly Open Sourced Plugins


Kong 1.2 Released with Major Performance Improvements and Newly Open Sourced Plugins!

Today, we’re excited to unveil the latest release of our flagship open source offering – Kong 1.2! In this release, we’ve made key latency and throughput performance improvements and open sourced some previously enterprise-only plugins to enhance your overall Kong experience. In addition, our team of engineers and community have implemented numerous small fixes and improvements to expand on the Declarative Config and DB-less deployment capabilities we released in Kong 1.1 in March.

Read on below to learn more about Kong 1.2’s new features, improvements and bug fixes, and how they’ll help you better scale and manage your Kong deployment. As always, be sure to check out our Changelog to get the full details.

Core Performance Improvements

To ensure that our users get the best possible performance, we’ve made big improvements to Kong’s underlying runtime in terms of latency and throughput performance.

Below, we’ve provided some test results, which display these improvements in action. All tests were run on a c1.small.x86 bare-metal instance on Packet. For more details on the four test case scenarios, you can check out the script here.

Key Results:

  • Better Throughput Performance
    • Testing throughput for Kong 1.2 demonstrates markedly better performance across all four scenarios.
    • Substantial improvements were made to plugin handling: when adding extra plugin overhead to our services, Kong 1.1 shows an adverse throughput impact of 14%, while in Kong 1.2 the impact is only 7.9% – an improvement of 56%.

  • 2x Throughput Improvement In Stress Test Scenario
    • To measure the impact of rebuilding the internal router object on throughput, we ran a route creation stress test – inserting 500 routes and plugins via the Admin API port while exercising the Proxy port using wrk. With Kong 1.2.0 in eventually consistent mode, the impact was only 11%, versus 50% in Kong 1.1.2.
    • Given the overall baseline throughput improvement in Kong 1.2.0, running Kong 1.2.0 in eventually consistent mode yielded a performance under stress that was comparable to that of Kong 1.1 with no stress applied!

 

  • 10x Better Max Latency In Stress Test Scenario
    • Reconfiguring Kong at runtime has a much lower impact in Kong 1.2 than in previous releases, resulting in a 10x lower max latency spike, which yields more predictable performance.
    • In Kong 1.2 there are two levels of router consistency available: strict consistency, which gives a routing behavior fully compatible with Kong 1.1 and below, and the new eventually consistent mode, where the reconfiguration is entirely asynchronous, meaning that Kong will continue to use the previous configuration until the new one is fully ready.
    • In Kong 1.2, latency improvements are observed even in the strict mode, because it also runs asynchronous updates in the background in addition to the regular synchronous behavior.

 

Newly Open Sourced Plugins

As part of our ongoing commitment to Kong’s open source community, we’re happy to share that we’ve made more previously enterprise-only plugins available to the community. The proxy cache plugin is now available to all Kong users, and the request-transformer plugin now includes capabilities previously available only in Enterprise, among which are templating and variable interpolation!

Key Additions

In Kong 1.2 we’ve made several important additions to improve functionality and ease of use as requested by the community, including:

Configuration

  • Asynchronous router updates: introduces a configuration property router_consistency with two possible values: strict and eventual. If set to eventual, router rebuild operations are performed asynchronously, out of the proxy path, reducing P95 latency when performing runtime Service/Route manipulation. A configuration sketch follows this list.
  • Kong can now preload entities at initialization with cache warmup. A new configuration directive db_cache_warmup_entities was introduced, allowing users to specify which entities should be preloaded. Cache warmup also allows for ahead-of-time DNS resolution for Services with a hostname. This feature reduces first-request latency, improving P99 latency.
  • New optional configuration properties for Postgres concurrency control: pg_max_concurrent_queries sets the maximum number of concurrent queries to the database; pg_semaphore_timeout allows you to tune the timeout for acquiring access to the database connection.
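
As a sketch, these properties can be set in kong.conf or, equivalently, through KONG_-prefixed environment variables; the values below are illustrative only:

# Illustrative values – tune entity names, query limits and timeouts for your deployment
$ export KONG_ROUTER_CONSISTENCY=eventual
$ export KONG_DB_CACHE_WARMUP_ENTITIES=services,plugins
$ export KONG_PG_MAX_CONCURRENT_QUERIES=5
$ export KONG_PG_SEMAPHORE_TIMEOUT=60000
$ kong restart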

Core

  • Support for wildcard SNI matching: the ssl_certificate_by_lua phase (and stream preread) is now able to match an SNI against any registered wildcard SNI
  • HTTPS routes can now be matched by SNI: the snis route attribute can now be set for HTTPS routes and is used as a routing attribute
  • The loading of declarative configuration is now done atomically, and with a safety check to verify that the new configuration fits in memory
  • The status code for HTTPS redirects is now configurable: a new attribute https_redirect_status_code was added to Route entities (see the example after this list)
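
For instance, both routing-related attributes can be set on an existing Route through the Admin API. This is a sketch; the values are examples only:

$ curl -X PATCH http://kong:8001/routes/{route} \
  --data "protocols[]=https" \
  --data "snis[]=example.com" \
  --data https_redirect_status_code=308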

Admin API

  • Add declarative configuration hash checking, avoiding reloading if the configuration has not changed. The /config endpoint now accepts a check_hash query argument; hash checking only happens if its value is set to 1
  • Entity schema validation endpoints: the new endpoint /schemas/:entity_name/validate can be used to validate an instance of any entity type in Kong without creating the entity
  • Add memory statistics to the /status endpoint. The response now includes a memory field, which contains lua_shared_dicts and workers_lua_vms, with stats on shared dictionaries and workers’ Lua VM memory usage. Additionally, the endpoint supports two optional query arguments: unit, the size unit, and scale, the number of digits to the right of the decimal point in the new human-readable memory strings. Examples of these endpoints follow this list.
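
Here is a sketch of these Admin API additions in action. The entity fields and query arguments are illustrative, and the /config call assumes a DB-less node with a declarative configuration file named kong.yml submitted as the config form field:

# Reload declarative configuration only if its hash has changed
$ curl -X POST "http://kong:8001/config?check_hash=1" \
    --data-urlencode config@kong.yml

# Validate a Service definition without creating it
$ curl -X POST http://kong:8001/schemas/services/validate \
    --data name=example-service \
    --data host=example.com \
    --data protocol=http \
    --data port=80

# Memory statistics in megabytes, with two decimal places
$ curl "http://kong:8001/status?unit=m&scale=2"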

Other Improvements and Bug Fixes

Kong 1.2 also features several fixes and improvements to the core, CLI, PDK, and plugins. For the full details, check out the Change Log here!

As always, the documentation for Kong 1.2 is available here. Additionally, as mentioned above, we will be discussing the key features in 1.2 in subsequent posts and on community calls, so stay tuned!

Thank you to our community of users, contributors, and core maintainers for your continuing support of Kong’s open source platform.

Please give Kong 1.2 a try, and be sure to let us know what you think!

Kong Nation

As usual, feel free to ask any question on Kong Nation, our Community forum. Learning from your feedback will allow us to better understand the mission-critical use-cases and keep improving Kong.

Happy Konging!

 

The post Kong 1.2 Released! Major Performance Improvements and Newly Open Sourced Plugins appeared first on KongHQ.

Get Sent to Kong Summit 2019 with this Dear Boss Letter


Kong Summit 2019 will be here before you know it, and we hope to see you October 2-3 in San Francisco, California, along with Kong’s community of engineers, architects, and microservices thought leaders.

With a packed two-day schedule, Kong Summit will be a unique opportunity to discover, learn and try the technologies, practical solutions and techniques needed to create modern service architectures.

Now’s the time to get approval for you (or your team) to attend, if you haven’t already, and to secure a Super Early Bird Ticket.

We’ve put together the letter below that you can customize and send to your manager or teammates.

 

Justification Letter

Subject line: Please send me to Kong Summit 2019: Building the Next Era of Software

Dear [Boss/Team],

I’d love to attend Kong Summit 2019: Building the Next Era of Software, which takes place October 2-3, 2019 in San Francisco, California at the Hilton Union Square. It will be a great opportunity for me to not only learn about new innovations and best practices around microservices and hybrid infrastructures but also to meet other [insert role] who are tackling similar challenges at their organizations.

Kong Summit explores the latest innovations in APIs, microservices, service mesh and cloud native technologies, including how organizations can effectively implement those technologies today. I’m confident my attendance at Kong Summit 2019 will provide valuable training to help us achieve [fill in team goal] this year.

A ticket to Kong Summit includes two full days of sessions, including technical and thought leadership tracks, hands-on workshops, meals and entrance to the event’s party on Day 1. If I register by July 15, the conference ticket is just $299. With travel and accommodations totaling [fill in cost], the all-in cost would be [total].

Thank you for considering my request to attend this event. I’m very excited about it and believe it will be a worthwhile investment for our company. I look forward to your thoughts on it.

Thanks,
[Your Name]

 

 

The post Get Sent to Kong Summit 2019 with this Dear Boss Letter appeared first on KongHQ.

Kong Celebrates Pride 2019!


This year’s Pride month was particularly special as we commemorated the 50th anniversary of the Stonewall Riots — a series of powerful protests in 1969 that launched the gay liberation movement and fight for LGBTQIA+ equality in the United States. 50 years later, we celebrate the significant strides that have been made for members of the LGBTQIA+ community to live their lives with authenticity and integrity, but we recognize that we still have a ways to go to achieve full equality.

It continues to be equally as important for Kongers to be loud and proud during the month of June. From holding a happy hour where we took turns sharing what Pride Month means to us to marching in the SF Pride Parade — our June was decked out in full rainbow celebration!

We want to make sure that no one at Kong feels they ever have to choose between their career and living their lives with authenticity and integrity, and we continue to honor and commemorate pride month while celebrating our differences every single day.

Follow us on Instagram to see more behind-the-scenes action at Kong! Interested in joining us? We’re hiring!

Kongers celebrating Pride Month 2019

The post Kong Celebrates Pride 2019! appeared first on KongHQ.


Kong Welcomes Depop as a Customer to the Kong Family!


As we grow our footprint globally, we’re gaining strong momentum in the UK. We recently added several fast-growing, London-based organizations to the Kong Enterprise community. These organizations are turning to Kong to help power their core business applications and accelerate application development as they shift to microservices-driven architectures.

Among these is Depop, the popular peer-to-peer social shopping app with over 14 million users. The company turned to Kong when it was relaunching its mobile application, which is powered entirely by APIs. The app runs in Docker and Kubernetes on Amazon Web Services and is composed of more than 60 microservices. After evaluating a variety of solutions in side-by-side performance testing, Depop chose Kong Enterprise due to its superior performance, ability to scale and best overall feature set. Because Kong Enterprise is container-ready, it could be used right out of the box, unlike other solutions that had a much more complex implementation. Next, Depop plans to use Kong Enterprise for its website and to manage its internal APIs as well.

We’re excited to welcome Depop to the Kong family and help them drive real business impact!

 


The post Kong Welcomes Depop as a Customer to the Kong Family! appeared first on KongHQ.

Kong Makes it onto Forbes’ List of Next-Billion Dollar Startups!


This year has been momentous in Kong’s journey as we have tripled in size, outgrown multiple offices and announced a successful Series C round. Today, we celebrate yet another huge milestone with the news that Forbes has included us in their annual list of Next-Billion Dollar companies. We are honored to receive such recognition and to be on the same list as 24 other ground-breaking companies that I can imagine are as elated as we are right now.

I want to take a moment to recognize every team member who has put in countless hours in fueling our growth, our incredible customers, and of course, our investors who have continued to believe in our vision. We continue to stay laser-focused on building an exceptional company for the long term. I invite you to join us for Kong Summit 2019 and see for yourself why Kong is being considered one of the most likely startups to reach $1 billion in value!

The post Kong Makes it onto Forbes’ List of Next-Billion Dollar Startups! appeared first on KongHQ.

Karhoo Hails Kong as its API Vendor of Choice


The UK continues to be a growing hotbed for tech innovation. At Kong, we’re seeing fast adoption of our Kong Enterprise platform in the UK from organizations across a wide range of industries, including e-commerce, financial services, on-demand services and travel/hospitality, among many others.

This week, Karhoo joins us as one of our newest UK-based customers. Karhoo provides a mobility marketplace by bringing together licensed fleets with global travel operators to create smarter mobility solutions for travellers and citizens. Its platform provides simple integration options (API/SDK/whitelabel) allowing partners to natively offer e-hailing and pre-booking into their applications and online channels. 

Facing scalability challenges, the developer team made a complete transition to microservices, adopting Google Cloud Platform, Kubernetes and Docker. Now running 120 microservices, Karhoo selected Kong Enterprise to automate the development process, and to measure, manage and secure its APIs. 

We look forward to helping Karhoo in their journey to microservices!

 


The post Karhoo Hails Kong as its API Vendor of Choice appeared first on KongHQ.

Cargill Bridges Legacy and Cloud Native with Kong Enterprise


As we continue to expand across new industries and regions, we’re excited to share Cargill’s digital transformation story and how it turned to Kong Enterprise to create a unified API platform across existing legacy and newer cloud native systems. We talked to Jason Walker, senior enterprise architect at Cargill, about their journey to microservices.

What was the driver for Cargill’s move to microservices?

We put Agile, DevOps and CI/CD at the heart of everything we do, and we realized that we needed new tooling to match the new ways we wanted to build products and services. We have a wealth of data from 150 years in business, but we needed to create a simple way for folks to access it and use it to enhance our stakeholders’ experiences. By building an API-first microservices architecture, we knew we could simultaneously provide access to those data and resources and take deployment concerns out of our developers’ workflows.

Why did you choose Kong? What benefits have you seen with Kong?

We wanted to first centralize the platform to make it consistent, then decentralize it to maximize scale, but the legacy API management vendors we looked at weren’t aligned with our vision for decentralization. Kong is lightweight and fast enough to run at the edge, with Kubernetes or in a mesh, which fits our vision perfectly. Together with Kong, we achieve the low latency needed to deploy high-performance, cloud native microservices and the flexibility to seamlessly integrate them across all our systems.

We also wanted to make sure that we were building for scale and enabling the services we built to be re-used across the company, not doing things on a one-off basis. Reusable capabilities need to have one single source of truth, and Kong ensures that all of our constituents are consistent in how they use our services. With Kong, we can just give them a platform-in-a-box that ensures standardization, security and compliance.

With the number of new services we’re adding and creating from decoupling legacy applications into microservices, manually managing our infrastructure would be nearly impossible. Using Kong with Kubernetes allowed us to make decentralization a reality. Kong and Kubernetes let us define how we want our services to behave and then automatically scale the services up or down depending on the situation. The combination of the two allows us to optimize for resource efficiency, resiliency and service availability all in one automated solution.

What results have you seen from implementing Kong?

  • Up to 65x faster deployment with automated validations
  • 450+ new digital services created in the past six months
  • Dynamic infrastructure that auto-scales up and down with demand

What additional plans do you have for using Kong in the future?

It was critical for us that the API platform we chose would allow us to move fast now, but also take us where we want to go. Kong’s view of where the world of software architecture is headed aligns well with ours at Cargill, and we think [Kong] Brain and [Kong] Immunity will really help make that vision a reality.

Want to learn more about Cargill’s digital transformation journey? Read more here!

 


Jason Walker and Sandeep Singh Kohli sharing a high five at Kong Summit 2018

The post Cargill Bridges Legacy and Cloud Native with Kong Enterprise appeared first on KongHQ.

10 Ways Microservices Create New Security Challenges


In the current microservices DevOps environment, there are tough new and evolving challenges for developers and teams to consider on top of the more traditional ones. From worsening versions of already common threats to new-generation evolving threats, new perspectives are required on securing microservices. These new perspectives may not be intuitive for many otherwise sophisticated DevOps and data teams. 

As thoroughly detailed previously in Machine Learning & AI for Microservices Security, when dealing with microservices, we’re ultimately talking about more code – way more code. More lines of code mean a greater risk of introducing vulnerabilities, and microservices entail much more complexity when it comes to security. So how can companies, their IT teams, and development teams all stay on top of such an amorphous threat ecosystem, where there are far more places for bugs to hide as well as more ways to conceal indicators of attack and compromise?

As Chris Richardson has noted, microservices could simply be described as an architectural style. But whether that description makes for a simpler world or not, Richardson concurs with most of the industry that complexity is the distinguishing hallmark of the microservices architectural model. All that complexity exposes more of your DevOps environment in many ways. Let’s examine the challenges this situation presents, in each of its diverse forms.

1. Greater complexity = expanding attack surface

Because microservices communicate with each other so extensively via APIs, independent of both machine architecture and even programming language, they create more attack vectors. More interacting services also increase the total number of possible points of critical failure. This requires keeping DevOps teams and security teams one step ahead of such microservice interruptions. When a microservice is breaking down, it is not operable, and when microservices are not operable, it’s harder to see how they are contributing to security threats, or whether such an issue is part of an ongoing attack in action.

2. Transitioning monolithic app functions to microservices 

Transitioning from a monolithic DevOps environment to a microservices environment is an unwieldy trade-off at best, but it is becoming a necessary one to stay competitive and support growth. As Jan Stenberg has commented, the need for varying API lifecycles has been at the crux of the move from monoliths to microservices for many organizations since 2016, and that remains true today.

A microservices setup is much more difficult to maintain than a monolithic one, because each microservice may be built on any of a wide variety of frameworks and coding languages. The branching complexities of stack support will influence decisions each time new microservices are added into the mix, and each additional language used to create a new microservice affects security by making it harder to ensure stability with the existing setup.

3. Traditional logging becomes ineffective

The new DevOps microservices ecosystem is spread out – way out. Because microservices are distributed, stateless, and therefore necessarily independent, there will be more logs. The challenge is that more logs threaten to camouflage issues as they pop up. With microservices running on multiple hosts, it becomes necessary to send logs from all of those hosts to a single, external, centralized location. For microservices security to be effective, logging needs to correlate events across multiple, potentially differing platforms, which requires a higher viewpoint to observe from, independent of any single API or service.

4. New monitoring stress

Monitoring presents a new problem of degree with microservices. As new services are piled onto the system, maintaining and configuring monitoring for them all is itself a new kind of challenge. Automation will be required just to support monitoring all those changes at scale for affected services. Load balancing is part of the security awareness that monitoring must account for, not just attacks and subtle intrusions.

5. Application security remains a factor

Applications are in one way or another the bread and butter of microservices teams. API security doesn’t simply go away with microservices security in place. With each additional new service, there emerges the same old challenge of maintenance and the configuration of API monitoring from the API team perspective. If application monitoring is not end-to-end, as Jonah Kowall concurs, it becomes too taxing to isolate or address issues. Without automation, it is less likely teams will be able to monitor changes and threats at scale for all services exposed.

6. More requests

Using request headers to allow services to communicate data is a common method that can reduce the number of requests made. But when a large number of services use this method, team coordination needs to increase while also becoming more efficient and streamlined. Furthermore, with larger numbers of requests, developers have to be able to comprehend the timeframe for processing them. The serialization and deserialization of requests to services are likely to build up and become unwieldy without adequate tools and methods in place just to keep tabs on requests and tie them into an autonomous security apparatus able to work at scale.

7. Fault tolerance

Fault tolerance in the microservices environment grows more complex than in a legacy monolithic system. Services must be able to cope with service failures and other timeouts occurring for mysterious reasons. When such service failures pile up, they can affect other services, creating clustered failures. Microservices require a new focus on interdependence and a new model for ensuring stability across services – easier said than done if a centralized microservices security platform is not in place.

8. Caching

While caching helps to reduce the sheer frequency of requests made, it is a double-edged sword. With the heightened capability caching brings to the DevOps environment, caching requests inevitably grow to handle a growing number of services. The extra capacity caching provides can also grow the complexity of, and the sheer need for, inter-service team communication. Automating, ordering, and optimizing this communication becomes a new requirement that may not have existed before in a monolithic DevOps environment.

9. Collaborative security efforts with DevOps

As the responsibility for microservice integrity spreads out between teams, DevOps is looped into security to a new level of intensity. Thinking with “one security brain” becomes a collective, rather than a hierarchical, endeavor. Gone are the days when a security officer could dictate requirements downward in any meaningful way. Collaboration and regular contact points at the macro and micro level become a necessity and must be worked into the data/Dev culture. Good security design is a matter of collaboration and of reconciling needs and desires against the baseline of known and emerging threats to microservices and the virtualized structures they combine to create.

10. Forward-looking security design

For all of the reasons we’ve seen unfurled above, we arrive at the priority of forward-looking security design. Far from indicating a “my way” approach from a siloed security authority team, we have come to a point in DevOps culture where all possible factors contribute to considerations that form the basis of a stable security policy and protocol formation process. Without knowing how everyone is going to be impacted inside their changing day-to-day team concerns by security practices, creating a good security design is virtually impossible. 

Agile scrum may become the replacement of the authority silo in the microservices model, but how can teams devote more time to scrum on security issues without detracting from their team duties? Due to the many ways complexity is changing the DevOps scene and causing systems to rethink their modus operandi, autonomy will need to replace the human factor not just for service integrity, but for the entire realm of security concerns, if DevOps teams are to stay focused on their core activities. 

In conclusion

The security model must take all 10 of the above factors into account if it is to thrive and grow into a self-sufficient, fully-adequate marriage of tools and security culture. Such a holistic system needs to be capable of acting autonomously on known threats, of a continuous machine-learned contribution to the known threat baseline for all those individual, unique APIs, services and diverse containers, as well as some ability to predict where threats might pop up next on this specific environment, rather than merely relying on the baseline alone and thereby creating a false “normal environment” that doesn’t match up with the actual environment.

For more information about taking the logical next steps in securing a microservices DevOps workplace, read about the 5 Best Practices for Securing Microservices at Scale.

The post 10 Ways Microservices Create New Security Challenges appeared first on KongHQ.

Announcing the Kong Summit Diversity Scholarship Program!


We are proud of the diverse group of individuals that makes up the Kong Community. That’s why we are committed to making sure Kong Summit is an inclusive event for all of our community members. We are excited to announce the Kong Summit Diversity Scholarship Program. We will be giving out a number of free tickets to individuals from underrepresented/marginalized groups. 

Kong Summit is taking place in San Francisco, CA on October 2-3, 2019 at the Hilton Union Square. The annual event will bring together members of the Kong user community, ecosystem contributors and industry thought leaders to explore what it takes to build the next era of software. The agenda will include sessions from industry leaders, expert panels on emerging technologies and trends, and workshops to expand your technical expertise. 

To be eligible to apply for the scholarship, applicants must be from an underrepresented group in the technology, cloud or open source communities. This includes, but is not limited to: LGBTQ+ people, women, people of color, people with disabilities, veterans and military service members, and people without financial assistance. Additional requirements include:

  • Applicants must be local to the San Francisco Bay Area or willing to pay for their own travel
  • Must be at least 18 years of age 
  • Agree to follow our Code of Conduct 

Ready to apply? Fill out the application

We look forward to meeting you at Kong Summit! 

The post Announcing the Kong Summit Diversity Scholarship Program! appeared first on KongHQ.

S3 Breach Prevention: Best Practices for Enterprise Cloud Security


When a data breach occurs involving a cloud service, the impulsive reaction is to denounce using the cloud (at least for sensitive information). Since cloud security is not widely understood, it may be difficult to delineate it in the context of more general information security.

Out of the box, AWS offers multiple strategies for account security, configuration management, and disaster recovery. It provides top-grade physical security for its data centers and network-wide protection from threats such as distributed denial-of-service (DDoS) attacks. Its underlying infrastructure and platforms are already PCI (Payment Card Industry) and HIPAA (Health Insurance Portability and Accountability Act) compliant. For leaders who would have to solve each of those problems from scratch, attempting to meet a cloud service provider’s (CSP’s) level of protection on their own would prove intractable.

In short, avoiding the cloud would introduce more, rather than fewer, security concerns for a company. So, what do companies need to do to ensure that they can at least cover their portion of shared responsibility? We will examine how an S3 breach could happen, what can be done to mitigate it and how to ensure that occasional vulnerabilities do not result in full-on breaches.

How could data in an S3 bucket be breached?

As with many issues in information security, the problem is often far more straightforward than the solution:

  1. If an attacker has stolen credentials to an account that has access to an S3 bucket, all the lower-level network protection in the world will not stop the attacker from at least opening the bucket. A breach involves more than opening the bucket, though—an attacker would also need to be able to extract the data.
  2. If an attacker can extract massive amounts of content without triggering alarms or activating automated prevention, the defender will not know about the breach. That is, not until the content is posted publicly or exploited in a way that harms the company (and possibly its customers). But even if the attacker can extract content, it is not useful unless the content is understandable.
  3. If an attacker can read or decrypt the content, there is nothing to prevent it from spreading and being exploited. Although there are many other concerns to review as well, storing the material in a plain (i.e., unencrypted) format ensures that the confidentiality or integrity of the content can be compromised.

Importantly, it is worth noting that a breach involves multiple points of failure. In this case, it requires a combination of stolen credentials for access, unmonitored and unlimited extraction, and readable content to result in a bona fide breach. When the explanation for a breach is “human error” or “firewall misconfiguration,” it is easy to imagine that just one thing went wrong. In reality, many other security controls needed to be absent or ineffective for one flaw to become catastrophic.

How can data in an S3 bucket be protected from a breach?

There are more ways an attacker can steal credentials than by reading a Post-It note. Frustrating password complexity requirements are only more likely to cause someone to write a password down somewhere. A password is supposed to be “something you know,” but it is quickly becoming impossible to remember. As a result, we use other factors to help “remember” passwords, such as proving “something you are” when we use our fingerprints to open 1Password. A public GitHub repository or an unsecured host could easily give away credentials. Often, these credentials are intended for service accounts (i.e., non-human, machine accounts) but could be used by an attacker to access the same services as well.

Static credentials, such as passwords and API keys, should not be the only criteria for obtaining access, given how easy they are to compromise. A combination of other factors, such as geolocation, hardware tokens or detection of user behavior, could be implemented to support authentication.

In AWS Identity and Access Management (IAM), there are several different ways to enable multi-factor authentication. A user must use a rotating token (“something you have”) in addition to the user’s password (“something you know”) to gain access to the account. A company can limit access to its resources so that a user needs to provide a rotating code each time the user wants to conduct a sensitive activity. For example, with S3, there is an option to require MFA if a user wants to delete an object.
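For illustration, here is roughly how MFA Delete can be enabled on a versioned bucket with the AWS CLI. The bucket name, account ID and MFA device below are placeholders, and the call has to be made by the bucket owner (the root account) using its MFA device:

# Sketch only: require an MFA code before versioned objects can be deleted
$ aws s3api put-bucket-versioning \
    --bucket example-medical-records \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"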

But suppose an attacker did somehow gain credentials for each factor of authentication? Perhaps the attacker is a malicious insider. Maybe the attacker stole the hardware token from an employee’s desk and took a picture of the password written on a Post-It note. If the employee was too embarrassed or too preoccupied to report the missing token, the attacker could easily bypass multi-factor authentication (MFA).

In that case, the next line of defense is to limit what each account can do. This limitation includes defined permissions and automated responses if an account exhibits suspicious behavior. AWS’s managed services support many of the technical controls that companies would otherwise need to buy individually or build from scratch. Consider the following solutions:

  • Get an overview of current security settings with Trusted Advisor
  • Track user activity and API usage with CloudTrail
  • Detect misconfiguration and compliance violations with Inspector
  • Analyze logs and automatically detect anomalies with GuardDuty

Even with this host of technical controls, the underlying assumption is that an organization has policies in place to prevent excessive or accidental use of access even under normal circumstances. Following the principle of least privilege makes an account compromise less impactful.

The absolute worst-case scenario would be that all users start with Administrator access. For AWS, all users (save for the root account) start with no permissions, so that every privilege a company grants them must be deliberate. From that point, security relies on how organizations decide to give access. Suppose that one account never has a reason to download or delete sensitive files. Also, suppose that the company fully trusts the user who owns that account. It still would not make sense to grant limitless permissions to the account due to the potential for accidents, let alone a malicious compromise.
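As a sketch of least privilege in practice (all names below are illustrative), access can be granted through a narrowly scoped IAM policy rather than broad S3 permissions:

# Sketch only: a read-only policy limited to a single bucket
$ cat > readonly-reports.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::example-reports-bucket",
      "arn:aws:s3:::example-reports-bucket/*"
    ]
  }]
}
EOF
$ aws iam create-policy \
    --policy-name ReadOnlyReportsBucket \
    --policy-document file://readonly-reports.json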

S3 buckets are private by default: no one besides the owner has access to a bucket until the owner explicitly grants permission to a user or group. Although S3 has its own flavor of ACLs, the current best practice is to set user policies or bucket policies. The choice depends on which perspective (i.e., users or buckets) is more natural for an organization to manage access.

Besides determining who may read and write the contents of the bucket, there is a related concern about content distribution. Ideally, people outside of an organization would never access buckets directly. In case the contents are intended for broad, public access, a company can manage distribution through an intermediary service such as CloudFront rather than directly from S3.
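When a bucket has no legitimate reason to be public at all, S3’s Block Public Access settings act as a simple guardrail at the bucket level. A sketch, with an illustrative bucket name:

$ aws s3api put-public-access-block \
    --bucket example-medical-records \
    --public-access-block-configuration \
        BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true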

But suppose that with these controls in place, alas—an attacker still gets through. The attacker has stolen credentials for an Administrator account (i.e., they have the highest level of permissions). Whether by chance or by having conducted extensive reconnaissance, the attacker has not done anything suspicious that would trigger any alarms.

Even if this were the case, no one should be able to download 100,000 medical files without the company’s security team batting an eyelid. This situation calls for data-loss prevention (DLP) controls. AWS Lambda can be configured to trigger on S3 events, CloudTrail events, GuardDuty findings or any number of other services, and it can be set to fire off and block suspicious activity automatically. Many third-party services offer DLP support, following a similar pattern of constant detection and automated enforcement.
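One illustrative wiring, with placeholder names and ARNs: GuardDuty findings can be routed through a CloudWatch Events rule to a Lambda function that applies the company’s own response logic (the function itself, and the permissions it needs, are out of scope for this sketch):

$ aws events put-rule \
    --name guardduty-findings-to-responder \
    --event-pattern '{"source": ["aws.guardduty"]}'

$ aws events put-targets \
    --rule guardduty-findings-to-responder \
    --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:dlp-responder'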

In the absolute worst-case scenario, suppose an attacker successfully downloads the contents of the S3 bucket. The attacker carefully planned to download the contents in such tiny fragments that they match the existing pattern of innocent users, so they snuck past all of the DLP controls. Now, there is only one defensive measure left: encryption.

By using client-side or customer-provided keys, a company can maintain complete ownership of how its data is encrypted and what can unlock it. That way, it can establish additional barriers that separate the key from the lock. At this point, a company can realize the main benefit of on-premise security in support of the cloud. Even if a company’s assets are entirely cloud-based, it still might retain the unique ability to unlock all of them with an on-premise key. This type of client-based key management is often a requirement for compliance. In the case of the breach described above, it would render an attack harmless unless the attacker also had access to the key. To gain access to an on-premise key would be an entirely separate feat well outside the scope of the attack in the diagrams.
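As a sketch of the customer-provided key (SSE-C) approach, with illustrative file and bucket names, the key stays under the company’s control and must be presented again for every read of the object:

# Generate a 256-bit key and keep it outside of AWS
$ openssl rand -out sse-c.key 32

# Upload an object encrypted with that key
$ aws s3 cp patient-records.csv s3://example-medical-records/records/ \
    --sse-c AES256 --sse-c-key fileb://sse-c.key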

How can companies ensure that occasional vulnerabilities cannot be exploited?

Companies are only able to do their best, and as such, they have an impossible task in covering, let alone knowing, all of their vulnerabilities. Still, that does not imply that (1) all vulnerabilities can or will be exploited, or (2) that it is futile to resist. Attackers are at a significant advantage in that they only need to find one way in and out. Companies need to defend all of those points. Still, before finding an opening, attackers will often encounter many failed attempts along the way. That is to say that things would be much worse if companies didn’t try at all and that things could at least be better if they tried harder.

The key takeaways:

  • Existing, managed services are often a much simpler and safer option than handcrafting one’s own.
  • The aspect of shared responsibility that falls on a cloud customer would still be the customer’s responsibility if it were strictly on-premise.
  • A company’s defense needs to be an in-depth, automated, and diverse combination of:
    1. administrative controls that limit account privileges
    2. configuration controls that detect vulnerable settings and compliance violations
    3. detective and preventive controls that passively monitor and actively respond to account activity
    4. other technical controls such as encryption and MFA with rotating keys that deflect the impact of compromise

The post S3 Breach Prevention: Best Practices for Enterprise Cloud Security appeared first on KongHQ.


Building Metrics Pipeline for High-Performance Data Collection


This is part three in a series discussing the metrics pipeline powering Kong Cloud.

In previous posts in this series, we’ve discussed how Kong Cloud collects, ships, and stores high volumes of metrics and time-series data. We’ve described the difference between push and pull models of collecting metrics data, and looked at the benefits and drawbacks of each from a manageability and performance perspective. 

 

This post will detail how our engineering team studied the integration of our metrics collection code with the services that actually process customer traffic, ensuring that our data collection was effective, correct, and most importantly, did not interfere with user traffic in terms of performance or latency.

Kong Cloud’s platform is powered at the edge by a tier of OpenResty servers. To briefly recap, OpenResty is a bundle of the venerable Nginx web server and LuaJIT, providing a stable, high-performance web application and proxy platform. Leveraging OpenResty at the edge allows us to write complex business logic (e.g., data transformation, communicating with third-party services, etc.) in a very efficient manner. Plumbing code related to mundane but critical tasks, like memory management and socket lifecycle handling, is delegated to the core OpenResty codebase, letting us focus on the correctness of our business logic. This paradigm lets us ship changes and new features to our edge tier in a matter of minutes, not the weeks or months that writing native Nginx module code would require.

That said, OpenResty’s safety and stability do not mean that we can be blasé about the code we ship. Any new logic introduced at the edge of our infrastructure adds complexity that needs to be validated. This validation extends beyond logic correctness – we also need to ensure that the code we ship doesn’t introduce unnecessary latency. Keeping latency at a minimum at our edge tier allows us to maintain a minimally sized fleet of edge machines in each datacenter, reducing cost and operational complexity.

We kept these principles closely in mind while developing the logging code used by edge tier machines as part of our metrics pipeline. Our previous blog post describes the basic architecture we use in OpenResty to store and periodically ship metrics data. The implementation itself relied on code patterns designed to leverage OpenResty APIs that are known to be friendly to the JIT compiler (e.g., including the resty.core library). We made a deliberate effort to avoid data structures that can only be iterated by Lua code that cannot be JIT compiled, such as the infamous pairs built-in function, and we used FFI data structures to reduce memory footprint.

Following the first pass of development, we took the opportunity to profile our logging library in a lab environment to gauge the expected performance hit it would introduce in our environment. We first set up a lab environment with a single Nginx worker process serving a local file from disk, with no Lua phase handlers or custom business logic:

 

$ wrk -c 20 -d 5s -t4 http://localhost:8080/index.html
Running 5s test @ http://localhost:8080/index.html
  4 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   218.65us   78.51us   7.68ms   96.75%
    Req/Sec    23.08k      1.43k   24.49k   91.18%
  468517 requests in 5.10s, 360.11MB read
Requests/sec:  91864.85
Transfer/sec:     70.61MB

 

The idea here was simply to get a rough baseline of what we’d expect to see from Nginx in terms of throughput and client latency. The raw numbers themselves aren’t as important as the delta in values we’ll see in future tests. The change ratio gave us an idea of the overall impact of the running code, given the parameters of the requested benchmark.

 

Next, we set up the same benchmark, with the addition of an empty Lua log phase handler:

 

# nginx config
log_by_lua_block {
  -- nothing to see here!
}

 

$ wrk -c 20 -d 5s -t4 http://localhost:8080/index.html
Running 5s test @ http://localhost:8080/index.html
  4 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   263.71us  286.15us  12.47ms   99.59%
    Req/Sec    19.73k      4.13k   76.84k   99.50%
  394735 requests in 5.10s, 303.40MB read
Requests/sec:  77402.86
Transfer/sec:     59.49MB

 

This gave us a rough idea of how much impact simply running the Lua VM would have. We saw about a 15% drop in max throughput in this test. Again, the idea here was not to perform a highly rigorous experiment, but rather to get a few simple benchmarks to make sure the inclusion of our logging library wouldn’t cause unexpected drops in performance.

 

We set up the benchmark for a third run, this time with our logging library included:

# nginx config
log_by_lua_block {
  local logger = require "edge.logger"
  logger.exec()
}

 

$ wrk -c 20 -d 5s -t4 http://localhost:8080/index.html
Running 5s test @ http://localhost:8080/index.html
  4 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   392.11us  219.26us  14.60ms   98.21%
    Req/Sec    13.03k     697.97   13.61k   95.10%
  264432 requests in 5.10s, 203.25MB read
Requests/sec:  51852.36
Transfer/sec:     39.85MB

 

Well, that was unexpected. We knew that we were going to see some drop in throughput, but running at little more than half the raw request rate of Nginx was indicative of a real bottleneck in the logging library. After a fruitless few hours of eyeballing the code looking for bad patterns, we turned to LuaJIT’s profiling tools. We ran the benchmark again with LuaJIT’s jit.dump module enabled, giving us a verbose data set detailing the JIT compiler’s behavior. JIT dumps are a useful tool, dumping out Lua bytecode, intermediate representation (IR), and generated machine code for all traces the JIT compiler generates. When the compiler cannot generate a trace, the profiler will spit out the failure reason at the point of trace abort. For example, when LuaJIT tried to compile a trace that relied on using pairs to iterate over a table, it generated the following output (most of it removed for brevity):

 

0000  . FUNCC               ; ipairs_aux
0066  JITERL  16 21         (logger.lua:497)
0067  GGET    13 15         ; "pairs"  (logger.lua:510)
0068  TGETV   14 5 9        (logger.lua:510)
0069  CALL    13 4 2        (logger.lua:510)
0000  . FUNCC               ; pairs
0070  ISNEXT  16 => 0093    (logger.lua:510)
---- TRACE 22 abort logger.lua:510 -- NYI: bytecode 72

 

This particular trace wasn’t concerning, as it ran on a code path that executed infrequently on a background timer, so it wasn’t responsible for slowing down client requests. As we looked through the rest of the dump output, one repeated abort, in particular, caught our eye:

0009  . GGET     4 0        ; "ipairs"  (logger.lua:358)
0010  . MOV      5 2        (logger.lua:358)
0011  . CALL     4 4 2      (logger.lua:358)
0000  . . FUNCC             ; ipairs
0012  . JMP      7 => 0021  (logger.lua:358)
0021  . ITERC    7 3 3      (logger.lua:358)
0000  . . FUNCC             ; ipairs_aux
0022  . JITERL   7 21       (logger.lua:358)
---- TRACE 25 abort logger.lua:359 -- inner loop in root trace

 

This trace was occurring in the hot path (code that was called as part of the logging phase on every request), so we suspected this was at least a substantial part of the slowdown. The cause of failure here was not an operator that couldn’t be compiled, but rather the use of the ipairs iterator inside another trace. This was a bit confusing at first: ipairs itself can be compiled by the JITer, so it wasn’t immediately clear why this was failing. Here is the chunk of code where this trace aborted:

 

350 local function incr_histogram(tbl, tbl_length, histogram, value)
351   if value == nil then
352     return
353   end
354   -- increment sum which is at last index
355   tbl[tbl_length - 1] = tbl[tbl_length - 1] + value
356
357   for i, uplimit in ipairs(histogram) do
358     if value <= uplimit then
359       tbl[i - 1] = tbl[i - 1] + 1
360       return
361     end
362   end
363 end

In this case, a root JIT trace had been created at the start of the incr_histogram function. The for loop over the ipairs iterator on line 357 generated a new side trace, returning out of the builtin ipairs once it found the appropriate histogram bucket to increment. It seemed that returning out of ipairs was causing the trace to abort. On a hunch, we replaced the call to ipairs with an integer-based loop:

diff --git a/files/lualib/edge/logger.lua b/files/lualib/edge/logger.lua
index 5860c5b..3b83106 100644
--- a/files/lualib/edge/logger.lua
+++ b/files/lualib/edge/logger.lua
@@ -354,8 +354,8 @@ local function incr_histogram(tbl, tbl_length, histogram, value)
   -- increment sum which is at last index
   tbl[tbl_length - 1] = tbl[tbl_length - 1] + value
 
-  for i, uplimit in ipairs(histogram) do
-    if value <= uplimit then
+  for i = 1, tbl_length - 1 do
+    if value <= histogram[i] then
       tbl[i - 1] = tbl[i - 1] + 1
       return
     end

This allows the LuaJIT engine to generate a side trace inside our loop (which, incidentally, may only execute over one iteration), and exit without being aborted by trying to return from within the builtin ipairs. Re-running the benchmark test with this change alone resulted in marked improvement:

 

$ wrk -c 20 -d 5s -t4 http://localhost:8080/index.html
Running 5s test @ http://localhost:8080/index.html
  4 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   293.55us   94.27us   6.61ms   98.36%
    Req/Sec    17.19k     466.69   17.65k   86.27%
  348846 requests in 5.10s, 268.13MB read
Requests/sec:  68406.82
Transfer/sec:     52.58MB

 

That’s a substantial improvement both in max throughput and request latency seen by the benchmark client. We were able to corroborate this improvement in performance with an increase in the number of compiled traces reported by SystemTap and the OpenResty stapxx set of tools.

This change highlights the need to thoroughly study and understand the runtime behavior of the code we ship, not just its functional correctness.
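For anyone who wants to reproduce this kind of analysis, the trace output can be enabled from within an OpenResty configuration. The snippet below is a minimal sketch (the file paths are placeholders); jit.v prints one line per trace, including abort reasons, while jit.dump produces the fuller bytecode/IR output shown above:

# nginx config (sketch)
init_by_lua_block {
    -- brief per-trace log, including trace aborts and their reasons
    require("jit.v").on("/tmp/jit-v.log")
    -- or, for the full dump used in this post:
    -- require("jit.dump").on(nil, "/tmp/jit-dump.log")
}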

Our metrics pipeline is a crucial element of Kong Cloud. It gives us keen insight into the performance of the various Kong clusters we manage, and its position within our infrastructure requires that it be highly performant and stable. As we fleshed out the development of the OpenResty library that delivers metrics from our edge tier, we kept a close eye on performance and compilation behavior, and we were able to leverage LuaJIT’s excellent debug tooling to knock down performance issues that weren’t caught during code review or integration testing before they reached production.

The post Building Metrics Pipeline for High-Performance Data Collection appeared first on KongHQ.

Kong 1.3 Released! Native gRPC Proxying, Upstream Mutual TLS Authentication, and Much More


Today, we are excited to announce the release of Kong 1.3! Our engineering team and awesome community have contributed numerous features and improvements to this release. Building on the success of the 1.2 release, Kong 1.3 is the first version of Kong that natively supports gRPC proxying and upstream mutual TLS authentication, along with a bunch of new features and performance improvements.

Read on below to understand more about Kong 1.3’s new features, improvements and fixes, and how you can take advantage of those exciting changes. Please also take a few minutes to read our Changelog as well as the Upgrade Path for more details.

Native gRPC Proxying

We have observed increasing numbers of users shifting toward microservices architectures and heard many of them express interest in native gRPC proxying support. Kong 1.3 answers this by supporting gRPC proxying natively, bringing more control and visibility to a gRPC-enabled infrastructure.

Key Benefits:

  • Streamline your operational flow.
  • Add A/B testing, automatic retries and circuit breaking to your gRPC services for better reliability and uptime.
  • More observability: logging, analytics or Prometheus integration for gRPC services? Kong’s got you covered.

Key Functions:

  • New protocol: The Route and Service entity’s protocol attribute can now be set to grpc or grpcs, which corresponds to gRPC over clear text HTTP/2 (h2c) and gRPC over TLS HTTP/2 (h2).

Upstream Mutual TLS Authentication

Kong has long supported TLS connections to upstream services. In 1.3, we added support for Kong to present a specific certificate while handshaking with the upstream, for increased security.

Key Benefits:

  • Being able to handshake with upstream services using a certificate makes Kong an even better fit for industries that require strong authentication guarantees, such as financial and health care services.
  • Better security: by presenting a trusted certificate, the upstream service knows for sure that the incoming request was forwarded by Kong, not by a malicious client.
  • Easier compliance.
  • More developer friendly: you can use Kong to front a Service that requires mutual TLS authentication and expose it through methods that are easier for developers to consume (for example, OAuth).

Key Functions:

  • New configuration attribute: The Service entity has a new field, client_certificate. If set, the corresponding Certificate will be used when Kong attempts to handshake with the service (see the sketch below).
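As a rough illustration, assuming a client Certificate has already been created in Kong (the Service name and UUID below are placeholders), the certificate can be attached to a Service via the Admin API:

$ curl -X PATCH localhost:8001/services/billing \
  --data client_certificate.id=51e77dc2-8f3e-4afa-9d0e-0e3bbbcfd515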

The Sessions Plugin

In Kong 1.3, we have open sourced the Sessions Plugin (previously only available in Kong Enterprise) for all users. Combined with other authentication plugins, it allows Kong to remember browser users that have previously authenticated. You can read the detailed documentation here.
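A minimal sketch of enabling it globally alongside an authentication plugin (the secret below is a placeholder and should be replaced with a long, randomly generated value):

$ curl -X POST localhost:8001/plugins \
  --data name=session \
  --data config.storage=kong \
  --data config.secret=CHANGE-ME-to-a-long-random-value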

NGINX CVE Fixes

Kong 1.3 ships with fixes to the NGINX HTTP/2 module (CVE-2019-9511, CVE-2019-9513 and CVE-2019-9516). We also released Kong 1.0.4, 1.1.3 and 1.2.2 to patch the vulnerabilities in older versions of Kong, in case an upgrade to 1.3 cannot happen immediately.

OpenResty Version Bump

The version of OpenResty has been bumped to the latest OpenResty release, 1.15.8.1, which is based on Nginx 1.15.8. This release of OpenResty brings better behavior when closing upstream keepalive connections, ARM64 architecture support and LuaJIT GC64 mode. The most noticeable change is that Kong now runs ~10% faster in the baseline proxy benchmarks with key authentication, thanks to the LuaJIT compiler generating more native code and OpenResty storing request context data more efficiently.

Additional New Features in Kong 1.3

Route by any request header

  • Kong’s router now has the ability to match Routes by any request header (not only Host).
  • This allows granular control over how incoming traffic is routed between services; a minimal sketch follows this list.
  • See documentation here.
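As a minimal sketch (the Service, Route name and header values below are placeholders), a Route that only matches requests carrying a given header can be created through the Admin API:

$ curl -X POST localhost:8001/services/my-service/routes \
  -H 'Content-Type: application/json' \
  -d '{"name": "us-east-only", "paths": ["/"], "headers": {"x-region": ["us-east"]}}'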

Least-connections load-balancing

  • Kong can now send traffic to the upstream services that have the fewest connections, improving upstream load distribution in certain use cases (see the sketch after this list).
  • See documentation here.
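A sketch of enabling it via the Admin API (the upstream name and target address are illustrative):

$ curl -X POST localhost:8001/upstreams \
  --data name=billing.v1.service \
  --data algorithm=least-connections

$ curl -X POST localhost:8001/upstreams/billing.v1.service/targets \
  --data target=10.0.0.11:8080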

Database export

  • The newly added kong config db_export CLI command can be used to create a dump of the database contents into a YAML file suitable for declarative config or for importing back into the database later (a short example follows this list).
  • This allows easier creation of declarative config files.
  • This makes backup and version controlling of Kong configurations much easier.
  • See documentation here.
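For example (the output filename is illustrative):

$ kong config db_export kong-backup.yml

The resulting file can be checked into version control or loaded into another database with kong config db_import.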

Proactively closing upstream keepalive connections

  • In older versions of Kong, upstream connections were never closed by Kong. This could lead to race conditions, as Kong might try to reuse a keepalive connection just as the upstream attempts to close it.
  • If you have seen an “upstream prematurely closed connection” error in your Kong error.log, this release should significantly reduce or even eliminate this error in your deployment.
  • New configuration directives have been added to control this behavior, read the full Changelog to learn more.

More listening flags support

  • Most notably, the reuseport flag, which can be used to improve load distribution and reduce latency jitter when the number of Kong workers is large (a sketch follows this list).
  • deferred and bind flag support has also been added. You can check the NGINX listen directive documentation to understand the effect of using them.
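For example, a kong.conf proxy listener using the new flag might look like the following (the ports are illustrative):

proxy_listen = 0.0.0.0:8000 reuseport, 0.0.0.0:8443 http2 ssl reuseport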

Other Improvements and Bug Fixes

Kong 1.3 also contains improvements such as new entities for storing CA certificates (certificates without a private key), Admin API enhancements and more PDK functions. We also fixed a lot of bugs along the way. Because of the amount of new features in this release, we cannot go over all of them in this blog post and instead encourage you to read the full Changelog here.

We also added a new section to the kong.conf template to better explain the capabilities of injected NGINX directives. For users who have customized templates for adding just a few NGINX directives, we recommend switching over to use the injected NGINX directives instead for better upgradability.

As always, the documentation for Kong 1.3 is available here. Additionally, as mentioned above, we will be discussing the key features in 1.3 in subsequent posts and on community calls, so stay tuned!

Thank you to our community of users, contributors, and core maintainers for your continuing support of Kong’s open source platform.

Please give Kong 1.3 a try, and be sure to let us know what you think!

Kong Nation

As usual, feel free to ask any question on Kong Nation, our Community forum. Learning from your feedback will allow us to better understand the mission-critical use-cases and keep improving Kong.

Happy Konging!

The post Kong 1.3 Released! Native gRPC Proxying, Upstream Mutual TLS Authentication, and Much More appeared first on KongHQ.

Kong and Istio: Setting up Service Mesh on Kubernetes with Kiali for Observability 


Service mesh is redefining the way we think about security, reliability, and observability when it comes to service-to-service communication. In a previous blog post about service mesh, we took a deep dive into our definition of this new pattern for inter-service communication. Today, we’re going to take you through how to use Istio, an open source cloud native service mesh for connecting and securing east-west traffic.

This step by step tutorial will walk you through how to install Istio service mesh on Kubernetes, control your north-south traffic with Kong, and add observability with Kiali.

Part 1: How to set up Istio on Kubernetes 

1. Set up Kubernetes Platform

To get started, you need to install and/or configure one of the various Kubernetes platforms. You can find all the necessary documentation for setup here. For local development, Minikube is a popular option if you have enough RAM to allocate to the Minikube virtual machine. Istio recommends 16 GB of memory and 4 CPUs. Due to burdensome hardware requirements, I will be using Google Kubernetes Engine (GKE) instead of Minikube. Here are the necessary steps to follow along:

(If you have Istio and Kubernetes set up and ready to go, jump to Part 2)

2. Set up GCP account and CLI

You will need to create a Google Cloud Platform (GCP) account. If you don’t have one, you can sign up here and receive free credits with a validity of 12 months. After signing up for an account, you will need to install the GCP SDK, which includes the gcloud CLI. We will use this to create the Kubernetes cluster. After installing the Cloud SDK, install the kubectl command-line tool by running the following command:

gcloud components install kubectl

Now that you have all the necessary tools installed, let’s dive into the fun part!

3. Create a new Kubernetes cluster

To do so, you first have to have an existing project. The following command will create a project with a project_id of “kong-istio-demo-project”. I also threw in a name just to give it more clarity.

gcloud projects create kong-istio-demo-project --name="Kong API Gateway with Istio"

To list all your existing projects and to ensure that the “kong-istio-demo-project” project was created successfully, type the following command:

gcloud projects list

With a project created, you can now create a cluster of running containers on GKE:

(Optional step) – If you are unsure which zone to use when you create your cluster, run the following command to list out all the compute zones and pick one. 

gcloud compute zones list

The following command will create a Kubernetes cluster. It will consist of 4 nodes, and sit on the us-east1-b compute zone.

gcloud container clusters create kong-istio-cluster \
--cluster-version latest \
--num-nodes 4 \
--zone us-east1-b \
--project kong-istio-demo-project

 

You’ll see a bunch of warnings but let’s zoom in on the relevant part at the bottom:

Yay! You have a cluster running with 4 nodes. Let’s get your credentials for kubectl. Using your project_id, cluster name, and compute zone, run the following command:

gcloud container clusters get-credentials kong-istio-cluster \
--zone us-east1-b \
--project kong-istio-demo-project

Lastly, you will need to grant the cluster administrator (admin) permissions to the current user. To create the necessary RBAC rules for Istio, the current user requires admin permissions:

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)

Now, if you get your nodes via kubectl, you should see all 4 nodes that you created on your cluster:

kubectl get nodes

4. Install Istio

To start, you will need to download Istio. You can either download it via the Istio release page or run the following command with a specific version number:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.4 sh -

Move into the Istio directory. The directory name may differ based on which version you downloaded. Since I specified 1.2.4 in the ISTIO_VERSION up above, I will be changing directory using the following command:

cd istio-1.2.4

And then you want to add the istioctl client to your PATH environment variable. The following command will append the Istio client to your existing PATH:

export PATH=$PWD/bin:$PATH

As you can see in the screenshot above, the Istio directory’s bin has been added to my path. We can now install Istio onto the cluster that we created earlier on GKE. To do so, you have to use kubectl apply to install all the Istio Custom Resource Definitions (CRDs) defined in the istio-1.2.4/install/kubernetes/helm/istio-init/files directory. This will create a new custom resource for each CRD object using the name and schema specified within the YAML files:

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

Once all the custom resources are created, we can install a demo profile that enforces strict mutual TLS authentication between all clients and servers. This profile installs an Istio sidecar on all newly deployed workloads. Therefore, it is important to only use this on a fresh Kubernetes cluster where all workloads will be Istio-enabled. While this demo will not cover Istio’s permissive mode, you can read more about it here. The following command will output a ton of lines, so I won’t be including the screenshot. Run this to install the istio-demo-auth demo profile on your existing cluster:

kubectl apply -f http://bit.ly/istiomtls

Check the services within the istio-system namespace to make sure everything ran smoothly. All services should have a cluster-ip except for the jaeger-agent:

kubectl get svc -n istio-system

That’s it for part 1! With all your services up and running, you have successfully installed a service mesh on a Kubernetes cluster. If you decided to install your cluster locally on Minikube or use another cloud provider’s Kubernetes platform, be sure to install Istio with the strict mutual TLS demo profile.

In part 2, we will deploy the Bookinfo Application, configure Kong declaratively, and visualize our mesh using Kiali.

Part 2: How to set up your Istio application with Kong and Kiali 

In Part 1, we covered how to create a Kubernetes cluster and how to install Istio with strict mTLS policy. If you’re just joining us at part 2, you do not have to follow the Google Kubernetes Engine (GKE) steps that we used in part 1. However, you do need Istio installed in a similar fashion that enforces mutual TLS authentication between all clients and servers. If you need to catch up and install Istio, follow our ‘Installing Istio’ section from part 1 of this blog or the official documentation

This is Istio’s Bookinfo Application diagram with Kong acting as the Ingress point:

You can follow the link above to get more details about the application. But to highlight the most important aspect of this diagram, notice that each service has an Envoy sidecar injected alongside it. The Envoy sidecar proxies are what handle the communication between all services.

For this demo, we will be focusing on the Kong service on the left. Kong excels as an Ingress point for any traffic entering your mesh. Kong is an open source gateway that offers extensibility with plugins. 

1. Installing the Bookinfo application

To start the installation process, make sure you are in the Istio installation directory. This should match the directory created during our Istio installation procedure.

Once you’re in the right directory, we need to label the namespace that will host our application. To do so, run:

kubectl label namespace default istio-injection=enabled

Having a namespace labeled istio-injection=enabled is necessary; otherwise, the default configuration will not inject a sidecar into the pods of your namespace.

Now deploy your BookInfo application with the following command:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Let’s double-check our services and pods to make sure that we have it all set up correctly:

kubectl get services

You should see four new services: details, productpage, ratings, and reviews. None of them have an external IP so we will use the Kong gateway to expose the necessary services. And to check pods, run the following command: 

kubectl get pods

This command outputs useful data, so let’s take a second to understand it. If you examine the READY column, each pod has two containers running: the service and an Envoy sidecar injected alongside it. Another thing to highlight is that there are three review pods but only one review service. The Envoy sidecar will load balance the traffic across the three review pods, which contain different versions, giving us the ability to A/B test our changes. With that said, you should now be able to access your product page!

kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"

2. Kong DB-less with declarative configuration

To expose your services to the world, we will deploy Kong as the north-south traffic gateway. Kong 1.1 shipped with declarative configuration and DB-less mode. Declarative configuration allows you to specify the desired system state through a YAML or JSON file instead of a sequence of API calls. Using declarative config provides several key benefits: reduced complexity, increased automation and enhanced system performance. Alongside Kubernetes’ ConfigMap feature, deploying Kong for Ingress control becomes as simple as one YAML file.

Here is the gist of the YAML file we will use to deploy and configure Kong:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kongconfig
data:
  kong.yml: |
    _format_version: "1.1"
    services:
    - url: "http://mockbin.org/"
      routes:
      - paths:
        - "/mockbin"
      plugins:
      - name: basic-auth
    - url: "http://productpage.default.svc:9080"
      routes:
      - paths:
        - "/"
      plugins:
      - name: rate-limiting
        config:
          minute: 60
          policy: local
    consumers:
    - username: kevin
      basicauth_credentials:
      - username: kevin
        password: abc123

As shown in the ConfigMap, we will be configuring Kong with two services. The first service is a hosted webpage: mockbin.org. Since we don’t want unauthorized people accessing this site, we will lock it down using an authentication plugin. Kong’s basic-auth plugin is one of many plugins that you can use to extend the functionality of your gateway. You can find prebuilt plugins here or explore the Plugin Development Guide to build your own. The second service that will sit behind Kong is the Bookinfo product page we deployed earlier. We will use the rate-limiting plugin to lightly protect this service. Granular control on plugins gives us simplicity AND modularity. Enough talk though, let’s deploy our gateway using:

kubectl apply -f https://bit.ly/kongyaml

To check if the Kong service and pods are up and running, run:

kubectl get pods,svc --sort-by=.metadata.creationTimestamp

When the gateway is running correctly, you will see an EXTERNAL-IP on the Kong service. Let’s export that to an environment variable so we can easily reference it in the remaining steps:

KONG_IP=$(kubectl get svc kong --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}")

Congratulations, you now have a service mesh up and running with a way to access it securely! To view the product page service’s GUI, go to http://$KONG_IP/productpage. We have a rate limit set for this service. To test the rate-limiting plugin, you can run a simple bash script like:

while true; do curl http://$KONG_IP/productpage; done

After you hit 60 calls within a minute, as defined in our ConfigMap, you will see a 429 status telling you that you hit your limit.

We can also test the routing to the external mockbin service. It should be inaccessible due to the authentication plugin we configured. Try it out by running:

curl -i http://$KONG_IP/mockbin

To recap, we successfully installed Istio with strict mTLS, deployed an application on the mesh, and secured the mesh using Kong with one YAML file. If you want to learn more about Kong and all its various features, check out the documentation page here. We have one last step for folks who would like a visual representation.

3. Kiali to visualize it all

Kiali is a console that offers observability and service mesh configuration capabilities. During our Istio installation steps, we actually installed Kiali within the same YAML file. If we look at our existing services in the istio-system namespace, you should see Kiali up and running.

kubectl get svc -n istio-system

It does not come configured with an external IP, so we will have to use port-forward to access the GUI. But prior to that, let’s continuously send traffic to our mesh so we can see that in Kiali:

while true; do curl http://$KONG_IP/productpage; done

With that up and running, open up a new terminal and port-forward the Kiali service so we can access it locally:

Please note that this is only for demo purposes. In a production deployment, the service would be available outside the cluster and be easily accessible. You should not be using port-forward for regular operations in a production system. 

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001

Now you can access the GUI through the following URL in your web browser:

http://localhost:20001/kiali/console 

There are a lot of features that Kiali offers; you can learn more about them in the official documentation. My favorite feature is the graphs that allow me to visualize the topology of the service mesh.

That is all I have for this walk-through. If you enjoyed the technologies used in this post, please check out their repositories since they are all open-source and would love to have more contributors! Here are their links for your convenience:

Kong: [Official Documentation] [GitHub] [Twitter]

Kubernetes: [Official Documentation] [GitHub] [Twitter]

Istio: [Official Documentation] [GitHub] [Twitter]

Envoy: [Official Documentation] [GitHub] [Twitter]

Kiali: [Official Documentation] [GitHub] [Twitter]

Thank you for following along!

The post Kong and Istio: Setting up Service Mesh on Kubernetes with Kiali for Observability  appeared first on KongHQ.

How to Manage your gRPC Services with Kong


With the 1.3 release, Kong is now able to natively manage and proxy gRPC services. In this blog post, we’ll explain what gRPC is and how to manage your gRPC services with Kong.

What is gRPC?

gRPC is a remote procedure call (RPC) framework initially developed by Google circa 2015 that has seen growing adoption in recent years. Based on HTTP/2 for transport and using Protobuf as Interface Definition Language (IDL), gRPC has a number of capabilities that traditional REST APIs struggle with, such as bi-directional streaming and efficient binary encoding.

While Kong supports TCP streams since version 1.0, and, as such, can proxy any protocol built on top of TCP/TLS, we felt native support for gRPC would allow a growing user base to leverage Kong to manage their REST and gRPC services uniformly, including using some of the same Kong plugins they have already been using in their REST APIs.

Native gRPC Support

What follows is a step-by-step tutorial on how to set up Kong to proxy gRPC services, demonstrating two possible scenarios. In the first scenario, a single Route entry in Kong matches all gRPC methods from a service. In the second, we create per-method Routes, which allows us, for example, to apply different plugins to specific gRPC methods.

As gRPC uses HTTP/2 for transport, it is necessary to enable HTTP/2 proxy listeners in Kong. To do so, add the following property in your Kong configuration:

proxy_listen = 0.0.0.0:9080 http2, 0.0.0.0:9081 http2 ssl

Alternatively, you can also configure the proxy listener with environment variables:

KONG_PROXY_LISTEN="0.0.0.0:9080 http2, 0.0.0.0:9081 http2 ssl" bin/kong restart

In this guide, we will assume Kong is listening for HTTP/2 proxy requests on port 9080 and for secure HTTP/2 on port 9081.

We will use the gRPCurl command-line client and the grpcbin collection of mock gRPC services.

Case 1: Single Service and Route

We begin with a simple setup with a single gRPC Service and Route; all gRPC requests sent to Kong’s proxy port will match the same route.

Issue the following request to create a gRPC Service (assuming your gRPC server is listening on localhost, port 15002):

$ curl -XPOST localhost:8001/services \
  --data name=grpc \
  --data protocol=grpc \
  --data host=localhost \
  --data port=15002

Issue the following request to create a gRPC Route:

$ curl -XPOST localhost:8001/services/grpc/routes \
  --data protocols=grpc \
  --data name=catch-all \
  --data paths=/

Using gRPCurl, issue the following gRPC request:

$ grpcurl -v -d '{"greeting": "Kong 1.3!"}' -plaintext localhost:9080 hello.HelloService.SayHello

The response should resemble the following:

Resolved method descriptor:
rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse );
Request metadata to send:
(empty)
Response headers received:
content-type: application/grpc
date: Tue, 16 Jul 2019 21:37:36 GMT
server: openresty/1.15.8.1
via: kong/1.2.1
x-kong-proxy-latency: 0
x-kong-upstream-latency: 0
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response trailers received:
(empty)
Sent 1 request and received 1 response

Notice that Kong response headers, such as via and x-kong-proxy-latency, were inserted in the response.

Case 2: Single Service, Multiple Routes

Now we move on to a more complex use-case, where requests to separate gRPC methods map to different Routes in Kong, allowing for more flexible use of Kong plugins.

Building on top of the previous example, let’s create a few more routes, for individual gRPC methods. The gRPC “HelloService” service being used in this example exposes a few different methods, as we can see in its Protobuf definition (obtained from the gRPCbin repository):

syntax = "proto2";
package hello;
service HelloService {
  rpc SayHello(HelloRequest) returns (HelloResponse);
  rpc LotsOfReplies(HelloRequest) returns (stream HelloResponse);
  rpc LotsOfGreetings(stream HelloRequest) returns (HelloResponse);
  rpc BidiHello(stream HelloRequest) returns (stream HelloResponse);
}
message HelloRequest {
  optional string greeting = 1;
}
message HelloResponse {
  required string reply = 1;
}

We will create individual routes for its “SayHello” and “LotsOfReplies” methods.

Create a Route for “SayHello”:

$ curl -XPOST localhost:8001/services/grpc/routes \
  --data protocols=grpc \
  --data paths=/hello.HelloService/SayHello \
  --data name=say-hello

Create a Route for “LotsOfReplies”:

$ curl -XPOST localhost:8001/services/grpc/routes \
  --data protocols=grpc \
  --data paths=/hello.HelloService/LotsOfReplies \
  --data name=lots-of-replies

With this setup, gRPC requests to the “SayHello” method will match the first Route, while requests to “LotsOfReplies” will be routed to the latter.

Issue a gRPC request to the “SayHello” method:

$ grpcurl -v -d '{"greeting": "Kong 1.3!"}' \
  -H 'kong-debug: 1' -plaintext \
  localhost:9080 hello.HelloService.SayHello

(Notice we are sending a header kong-debug, which causes Kong to insert debugging information as response headers.)

The response should look like:

Resolved method descriptor:
rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse );
Request metadata to send:
kong-debug: 1
Response headers received:
content-type: application/grpc
date: Tue, 16 Jul 2019 21:57:00 GMT
kong-route-id: 390ef3d1-d092-4401-99ca-0b4e42453d97
kong-service-id: d82736b7-a4fd-4530-b575-c68d94c3493a
kong-service-name: s1
server: openresty/1.15.8.1
via: kong/1.2.1
x-kong-proxy-latency: 0
x-kong-upstream-latency: 0
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response trailers received:
(empty)
Sent 1 request and received 1 response

Notice the Route ID refers to the first route we created.

Similarly, let’s issue a request to the “LotsOfReplies” gRPC method:

$ grpcurl -v -d '{"greeting": "Kong 1.3!"}' \
  -H 'kong-debug: 1' -plaintext \
  localhost:9080 hello.HelloService.LotsOfReplies

The response should look like the following:

Resolved method descriptor:
rpc LotsOfReplies ( .hello.HelloRequest ) returns ( stream .hello.HelloResponse );
Request metadata to send:
kong-debug: 1
Response headers received:
content-type: application/grpc
date: Tue, 30 Jul 2019 22:21:40 GMT
kong-route-id: 133659bb-7e88-4ac5-b177-bc04b3974c87
kong-service-id: 31a87674-f984-4f75-8abc-85da478e204f
kong-service-name: grpc
server: openresty/1.15.8.1
via: kong/1.2.1
x-kong-proxy-latency: 14
x-kong-upstream-latency: 0
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response contents:
{
  "reply": "hello Kong 1.3!"
}
Response trailers received:
(empty)
Sent 1 request and received 10 responses

Notice that the kong-route-id response header now carries a different value and refers to the second Route created in this page.

Note: gRPC reflection requests will still be routed to the first route we created (the “catch-all” route), since the request matches neither SayHello nor LotsOfReplies routes.

Logging and Observability Plugins

As we mentioned earlier, Kong 1.3 gRPC support is compatible with logging and observability plugins. For example, let’s try out the File Log and Zipkin plugins with gRPC.

File Log

Issue the following request to enable File Log on the “SayHello” route:

$ curl -X POST localhost:8001/routes/say-hello/plugins \
  --data name=file-log \
  --data config.path=grpc-say-hello.log

Follow the output of the log as gRPC requests are made to “SayHello”:

$ tail -f grpc-say-hello.log
{"latencies":{"request":8,"kong":5,"proxy":3},"service":{"host":"localhost","created_at":1564527408,"connect_timeout":60000,"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","protocol":"grpc","name":"grpc","read_timeout":60000,"port":15002,"updated_at":1564527408,"write_timeout":60000,"retries":5},"request":{"querystring":{},"size":"46","uri":"/hello.HelloService/SayHello","url":"http://localhost:9080/hello.HelloService/SayHello","headers":{"host":"localhost:9080","content-type":"application/grpc","kong-debug":"1","user-agent":"grpc-go/1.20.0-dev","te":"trailers"},"method":"POST"},"client_ip":"127.0.0.1","tries":[{"balancer_latency":0,"port":15002,"balancer_start":1564527732522,"ip":"127.0.0.1"}],"response":{"headers":{"kong-route-id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","content-type":"application/grpc","connection":"close","kong-service-name":"grpc","kong-service-id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","kong-route-name":"say-hello","via":"kong/1.2.1","x-kong-proxy-latency":"5","x-kong-upstream-latency":"3"},"status":200,"size":"298"},"route":{"id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","updated_at":1564527431,"protocols":["grpc"],"created_at":1564527431,"service":{"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c"},"name":"say-hello","preserve_host":false,"regex_priority":0,"strip_path":false,"paths":["/hello.HelloService/SayHello"],"https_redirect_status_code":426},"started_at":1564527732516}
{"latencies":{"request":3,"kong":1,"proxy":1},"service":{"host":"localhost","created_at":1564527408,"connect_timeout":60000,"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","protocol":"grpc","name":"grpc","read_timeout":60000,"port":15002,"updated_at":1564527408,"write_timeout":60000,"retries":5},"request":{"querystring":{},"size":"46","uri":"/hello.HelloService/SayHello","url":"http://localhost:9080/hello.HelloService/SayHello","headers":{"host":"localhost:9080","content-type":"application/grpc","kong-debug":"1","user-agent":"grpc-go/1.20.0-dev","te":"trailers"},"method":"POST"},"client_ip":"127.0.0.1","tries":[{"balancer_latency":0,"port":15002,"balancer_start":1564527733555,"ip":"127.0.0.1"}],"response":{"headers":{"kong-route-id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","content-type":"application/grpc","connection":"close","kong-service-name":"grpc","kong-service-id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","kong-route-name":"say-hello","via":"kong/1.2.1","x-kong-proxy-latency":"1","x-kong-upstream-latency":"1"},"status":200,"size":"298"},"route":{"id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","updated_at":1564527431,"protocols":["grpc"],"created_at":1564527431,"service":{"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c"},"name":"say-hello","preserve_host":false,"regex_priority":0,"strip_path":false,"paths":["/hello.HelloService/SayHello"],"https_redirect_status_code":426},"started_at":1564527733554}

Notice the gRPC requests were logged, with info such as the URI, HTTP verb, and latencies.

Zipkin

Start a Zipkin server:

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

Enable the Zipkin plugin on the grpc Service:

curl -X POST localhost:8001/services/grpc/plugins \
    --data name=zipkin \
    --data config.http_endpoint=http://127.0.0.1:9411/api/v2/spans \
    --data config.sample_ratio=1

As requests are proxied, new spans will be sent to the Zipkin server and can be visualized through the Zipkin Index page, which is, by default, http://localhost:9411/zipkin:

To display Traces, click “Find Traces”, as shown above. The following screen will list all traces matching the search criteria:

A trace can be expanded by clicking into it:

Spans can also be extended, as displayed below:

Notice that, in this case, it’s a span for a gRPC reflection request.

What’s Next for gRPC support?

Future Kong releases will include support for natively handling Protobuf data, allowing gRPC compatibility with more plugins, such as request/response transformer.

As always, please get in touch with us through our community forum, Kong Nation, with any issues or questions about Kong gRPC support.

The post How to Manage your gRPC Services with Kong appeared first on KongHQ.

Introducing Kuma: The Universal Service Mesh


We are excited to announce the release of a new open source project, Kuma – a modern, universal control plane for service mesh! Kuma is based on Envoy, a powerful proxy designed for cloud native applications. Envoy has become the de-facto industry sidecar proxy, with service mesh becoming an important implementation in the cloud native ecosystem as monitoring, security and reliability become increasingly important for microservice applications at scale. 

“It’s been gratifying to see how quickly Envoy has been adopted by the tech community, and I’m really excited by Kong’s new ‘Kuma’ project,” said Matt Klein, creator of the Envoy proxy. “Kuma extends Envoy’s use cases and will make it faster and easier for companies to create cloud native applications and manage them in a service mesh.” 

Kuma addresses limitations of first-generation service mesh technologies by enabling seamless management of any service on the network, including L4 and L7 traffic, as well as microservices and APIs. Its fast data plane and advanced control plane make it significantly easier to use and adopt across every team. Kuma runs on any platform, including Kubernetes, virtual machines, containers, bare metal and legacy environments, to enable a more practical cloud native journey across an entire organization.

Out of the box, Kuma makes the underlying network safe, reliable and observable without having to change any code, while still allowing for advanced customization thanks to its mature control plane. The combination of its fast data plane and advanced control plane allows users to easily set permissions, expose metrics and set routing rules with just a few commands by either using native Kubernetes CRDs or a RESTful API. 

Kuma’s key features include:

  • Software Defined Security – Kuma enables mTLS for all L4 traffic. Permissions can also be easily set to ensure appropriate access control.
  • Powerful Productivity Capabilities – Kuma enables users to quickly implement tracing and logging, allowing users to better analyze metrics for rapid debugging.
  • Sophisticated Routing & Control – Kuma provides fine-grained traffic control capabilities, such as circuit breakers and health checks, to enhance L4 routing.

Kuma is supported and maintained by Kong with the vision to make data available anytime, anywhere by simplifying communication between services. Kuma has been built with lessons learned from more than 150 enterprise organizations running service meshes in production. Kong also plans to continue contributing upstream to the Envoy project.

Want to learn more about #KumaMesh? 

  • Read our Getting Started documentation 
  • Join us on Slack
  • Learn more about Kuma at Kong Summit 2019, Oct. 2-3 in San Francisco. Register with code Kuma19 by September 30th for a $99 ticket.

The post Introducing Kuma: The Universal Service Mesh appeared first on KongHQ.
